Important Note: In response to the ongoing pandemic, the MISP 2021 conference will be held in online mode. All timings are as per Indian Standard Time (IST) (UTC+05:30).
Welcome to the MISP-2021 Virtual Conference
Enter the virtual conference through the following URL as a participant:
Login URL: https://www.misp.nitap.ac.in/mispvirtualconferenceplateform (Note: If you face any login problems, please contact email@example.com)
If you are a Session Chair, Speaker or Paper Presenter of a session, please go through the program schedule and enter the respective hall using one of the following links. Kindly join 15 minutes before the beginning of the session.
(Note: If you face any login problems, please contact firstname.lastname@example.org)
Sept 23-25, 2021: Main Conference program (6 keynote talks, 1 Tutorial, 133 Research papers under 19 research sessions)
Prof. Witold Pedrycz
University of Alberta, Edmonton, Canada
Title: Design and Analysis of Interpretable Models with Federated Learning
Abstract: In data analytics, system modeling, and managerial decision-making models, the aspects of interpretability and explainability are of paramount relevance, just to refer here to explainable Artificial Intelligence (XAI). They are especially timely in light of the increasing complexity of systems one has to cope with and ultimate concerns about privacy and security of data and models. With the omnipresence of mobile devices, distributed data, and security and privacy restrictions, federated learning becomes a feasible alternative.
We advocate that there are two factors that immensely contribute to the realization of the above important requirements, namely, (i) a suitable level of abstraction along with its hierarchical aspects in describing the problem and (ii) a logic fabric of the resultant construct. It is demonstrated that their conceptualization and the following realization can be conveniently carried out with the use of information granules (for example, fuzzy sets, sets, rough sets, and alike).
Information granules are building blocks forming the interpretable environment, capturing the essence of data and the key relationships existing there. Their emergence is supported by a systematic and focused analysis of data. At the same time, their initialization is specified by stakeholders and/or the owners and users of data. We present a comprehensive discussion of the design of information granules and their description.
We formulate the principles and investigate novel algorithms of federated learning being applied to the formation of conditions and conclusions of rules in rule-based models, and analyze the impact of their information granularity on effectiveness of learning, required communication bandwidth and feedback mechanisms in the learning client-server loop. We investigate ways of evaluating the performance of the model obtained through federated learning by engaging mechanisms of Granular Computing and analyze granular evaluation measures to assess the quality of the model both at the client and server level.
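The abstract above does not detail the speaker's federated algorithms for granular rule-based models; purely as a generic illustration of the client-server aggregation loop it refers to, a minimal FedAvg-style server step (a standard technique, not necessarily the method presented in the talk) might look like:

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    """One server-side aggregation round: average the parameter vectors
    sent by the clients, weighted by how many samples each client holds."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_params)          # (n_clients, n_params)
    # Weighted mean: clients with more data contribute more to the global model
    return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()
```

In a full loop, the server would broadcast the averaged parameters back to the clients for further local training; the granularity of what is exchanged (full parameters vs. granular summaries) governs the communication bandwidth the abstract mentions.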
Short Bio: Witold Pedrycz is a Professor and Canada Research Chair (CRC, Computational Intelligence) in the Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Canada. He is also with the Systems Research Institute of the Polish Academy of Sciences, Warsaw, Poland. In 2009, Dr. Pedrycz was elected a foreign member of the Polish Academy of Sciences. His main research directions involve Computational Intelligence, fuzzy modeling and Granular Computing, knowledge discovery and data mining, fuzzy control, pattern recognition, knowledge-based neural networks, relational computing, and Software Engineering. He has published numerous papers in these areas. He is also the author of 14 research monographs covering various aspects of Computational Intelligence and Software Engineering. Witold Pedrycz has been a member of numerous program committees of IEEE conferences in the area of fuzzy sets and neurocomputing.
Dr. Pedrycz is intensively involved in editorial activities. He is Editor-in-Chief of Information Sciences and Editor-in-Chief of IEEE Transactions on Systems, Man, and Cybernetics - Part A. He currently serves as an Associate Editor of IEEE Transactions on Fuzzy Systems and is a member of the editorial boards of a number of other international journals. In 2007, he received the prestigious Norbert Wiener Award from the IEEE Systems, Man, and Cybernetics Council. He is a recipient of the IEEE Canada Computer Engineering Medal 2008. In 2009, he received the Cajastur Prize for Soft Computing from the European Centre for Soft Computing for "pioneering and multifaceted contributions to Granular Computing".
Prof. P. Nagabhushan
Director, Indian Institute of Information Technology Allahabad, Prayagraj (U.P.), India
Title: Machine Learning – Intelligent Systems: EXPECTATIONS
Abstract: AI and Machine Learning have been receiving such great attention that it creates the impression that Machine Learning and AI can provide the solution to every problem. This keynote proposes to dwell upon the realities and expectations in this context. It is necessary to understand how machine learning/artificial intelligence can assure confidence that the action executed by the machine/algorithm/intelligent system is trustworthy. The issue is to ensure that machine learning accomplishes stable and highly accurate performance under diverse situations.
Prof. P. Nagabhushan, presently the Director of the Indian Institute of Information Technology Allahabad, Prayagraj (an Institute of National Importance by an Act of Parliament), earlier served as Chief Nodal Officer, Dean, and Chairman with various academic administrative and academic reformation responsibilities at the University of Mysore, Mysore. As the founder Professor of the Department of Studies in Computer Science, University of Mysore, he focused on moulding the department into a learning- and research-focused unit, shaping it into a centre of excellence in Computer Cognition and Recognition covering the areas of Pattern Recognition, Image Processing, Intelligence, and Learning. Prior to this, he was the coordinator of the M.Tech. program at SJ College of Engineering, Mysore.
Prof. P. Nagabhushan has remained enthusiastically active in implementing continuous learning, continuous assessment, and choice-based credit earning: since 2000 at the department level, since 2010 at the University of Mysore, and since 2017 at the Indian Institute of Information Technology Allahabad; these practices are now being promoted by NEP2020. He has supervised 33 Ph.D. scholars and has authored more than 200 journal papers, out of more than 500 research papers in total. He was an invited academician and researcher in the USA, Japan, France, and Sudan. He was the investigator of several research projects funded by UGC, MHRD, AICTE, ICMR, ISRO, IFCAR, DRDO, and MHA. He has received many awards for his academic roles, and he is a recipient of fellowships from the Institute of Engineers (FIE), the Institute of Electronics and Telecommunication Engineers (FIETE), and the International Academy of Physical Sciences (FIAPS). His Google Scholar citation count is 2838, with an h-index of 27 and an i10-index of 81.
Brief Profile Link- https://www.iiita.ac.in/administration/director_profile/
Prof. Kai-Lung Hua
National Taiwan University of Science and Technology, Taiwan
Title: Small Data Learning with the Applications of Anomaly Detection and Skeleton-based Action Recognition.
Abstract: In the past decade, the computer vision community witnessed the deep learning revolution that fueled several breakthroughs in the field, solving various visual tasks that were previously intractable. However, these breakthroughs were mostly a product of very deep networks with millions of parameters and the large amounts of annotated data required to train them. This requirement makes it difficult to adopt these models in domains where several constraints hinder the collection of large and diverse annotated data sets. In this talk, I will present novel solutions to the problem of small data learning, with applications to anomaly detection and skeleton-based action recognition. First, I will introduce the proposed paMAE, an anomaly detection model that incorporates an external memory and a deep spatial perceptual distance that is more robust to shifts and small inaccuracies, which can stem from the lack of data coverage. Secondly, I will present a new self-training GCN-based framework that reduces the usage of labeled skeleton data and avoids the under-fitting problem caused by varying sizes of pseudo-labeled data.
Short Bio: Dr. Kai-Lung Hua is a professor in the Dept. of Computer Science and Information Engineering of National Taiwan University of Science and Technology (Taiwan Tech). Dr. Hua received his Ph.D. from the School of Electrical and Computer Engineering, Purdue University, USA. His current affiliations are Vice Dean, College of Electrical Engineering and Computer Science, Taiwan Tech; Director, Research Center for Information Technology Innovation, Taiwan Tech; and Professor, CSIE Dept., Taiwan Tech. Dr. Hua has received several awards, honours, and nominations for his academic work. His research interests include multimedia big data and deep learning, computer vision and pattern recognition, data science and machine learning, social media, mobile multimedia applications, and app development. He has over 1550 citations for his research works.
Prof. P. N. Suganthan
Nanyang Technological University, Singapore
Title: Randomization Based Deep and Shallow Learning for Classification of Tabular Data.
Abstract: This talk will first introduce randomization-based neural networks. Subsequently, the origin of randomization-based neural networks will be presented. The popular instantiation of the feedforward model called the random vector functional link neural network (RVFL) originated in the early 1990s. Other randomized feedforward models that will be briefly mentioned are random weight neural networks (RWNN), extreme learning machines (ELM), the stochastic configuration network (SCN), the broad learning system (BLS), etc. Recently developed deep implementations of the RVFL will be presented in detail. The talk will also include extensive benchmarking studies using tabular classification datasets with respect to state-of-the-art classifiers.
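As a rough illustration of the RVFL idea mentioned in the abstract (random, untrained hidden weights plus a direct input-output link, with only the output weights solved in closed form), a minimal NumPy sketch, with hyperparameters chosen arbitrarily, might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def rvfl_fit(X, Y, n_hidden=100, ridge=1e-3):
    """Fit an RVFL network: the random hidden weights are never trained;
    only the output weights are obtained by ridge-regularized least squares."""
    n_features = X.shape[1]
    W = rng.normal(size=(n_features, n_hidden))  # random hidden weights, fixed
    b = rng.normal(size=n_hidden)                # random biases, fixed
    H = np.tanh(X @ W + b)                       # nonlinear hidden activations
    D = np.hstack([X, H])                        # direct link: raw features + hidden
    # Closed-form solution for the output weights (no iterative training)
    beta = np.linalg.solve(D.T @ D + ridge * np.eye(D.shape[1]), D.T @ Y)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    D = np.hstack([X, np.tanh(X @ W + b)])
    return D @ beta
```

Because the direct link keeps the raw features in the output layer's design matrix, purely linear structure in the data can be recovered even with few hidden units; the deep RVFL variants discussed in the talk stack further randomized layers on top of this scheme.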
Short Bio: Ponnuthurai Nagaratnam Suganthan received the B.A. degree, Postgraduate Certificate, and M.A. degree in Electrical and Information Engineering from the University of Cambridge, UK in 1990, 1992 and 1994, respectively. He received an honorary doctorate (i.e. Doctor Honoris Causa) in 2020 from the University of Maribor, Slovenia. After completing his PhD research in 1995, he served as a pre-doctoral Research Assistant in the Dept. of Electrical Engineering, University of Sydney in 1995–96 and a lecturer in the Dept. of Computer Science and Electrical Engineering, University of Queensland in 1996–99. He moved to Singapore in 1999. He was an Editorial Board Member of the Evolutionary Computation Journal, MIT Press (2013-2018). He is/was an associate editor of the Applied Soft Computing (Elsevier, 2018-), Neurocomputing (Elsevier, 2018-), IEEE Trans on Cybernetics (2012 - 2018), IEEE Trans on Evolutionary Computation (2005 - ), Information Sciences (Elsevier) (2009 - ), Pattern Recognition (Elsevier) (2001 - ), and IEEE Trans on SMC: Systems (2020 - ) journals. He is a founding co-editor-in-chief of Swarm and Evolutionary Computation (2010 - ), an SCI-indexed Elsevier journal. His co-authored SaDE paper (published in April 2009) won the "IEEE Trans. on Evolutionary Computation outstanding paper award" in 2012. His former PhD student, Dr Jane Jing Liang, won the IEEE CIS Outstanding PhD dissertation award in 2014. His research interests include randomization-based learning methods, swarm and evolutionary algorithms, pattern recognition, deep learning, and applications of swarm, evolutionary & machine learning algorithms. He was selected as one of the highly cited researchers in computer science by Thomson Reuters Science Citations yearly from 2015 to 2020. He served as the General Chair of the IEEE SSCI 2013.
He has been a member of the IEEE (S'90, M'92, SM'00, Fellow 2015) since 1991 and an elected AdCom member of the IEEE Computational Intelligence Society (CIS) in 2014-2016. He is an IEEE CIS distinguished lecturer (DLP) in 2018-2021.
Dr. Kamiya Khatter
Editor, Springer Nature
Title: Academic Books Publication with Springer Nature: Current and Future Trends.
Abstract:
• Introduction to Springer Nature
• Types of scholarly books and series
• The Book Publishing Process
• Book proposal
• Manuscript Preparation
• Open Access
• Post Publication
• Future of Books
Short Bio: Dr. Kamiya Khatter is presently working as an Editor at Springer Nature. She is part of the Global Acquisition Team at Springer, an imprint of Springer Nature. She is responsible for the acquisition and publishing of books in the Engineering and Applied Sciences portfolio. She has previously worked in various areas of academic publishing for both books and journals.
Dr. Mahardhika Pratama
Nanyang Technological University, Singapore
Title: Continual and Autonomous Learning Machine for Lifelong Learning of Data Streams.
Abstract: Continual learning of data streams is a growing research area whose underlying goal is to design a machine learning algorithm that can handle streaming tasks in a never-ending fashion. Such an algorithm is supposed to accumulate knowledge from all encountered tasks without the catastrophic forgetting problem, thus increasing its intelligence as the number of tasks grows. In this talk, we will start from the data stream problem, which can be understood as continual learning of a single task. We will discuss various issues of data streams before moving to the continual learning problem, where a model is exposed to a sequence of different tasks. We will introduce deep learning algorithms addressing the abovementioned problems.
Short Bio: Dr. Mahardhika Pratama has been an assistant professor at the School of CSE, Nanyang Technological University, Singapore since 2017. His research interests encompass continual learning, data stream mining, fuzzy machine learning, and intelligent control systems, and he has an excellent publication track record in top venues such as AAAI, SIGKDD, CIKM, ICDM, SDM, IEEE TCYB, IEEE TNNLS, and IEEE TFS. He has led a special issue on autonomous machine learning (AML) in Information Sciences and a workshop on AML at ICDM 2019. Dr. Pratama currently serves as an associate editor of several top journals (IEEE TFS, Information Sciences, Knowledge-Based Systems, Evolving Systems, Complexity, and the Journal of Control and Decision) as well as Editor-in-Chief of the International Journal of Business Intelligence and Data Mining. He was a program chair of INNS BDDL 2019 and a local chair of ICBK 2019. During his career, Dr. Pratama has secured up to $2 million in research funding. He has graduated 6 PhD students in a timely fashion and supervised 6 postdoctoral research fellows. For his research work, he received the prestigious IEEE TFS publication award in 2019 and the Amity researcher award in data streams in 2019, achievements attained only six years after he obtained his PhD from UNSW in 2014.
Dr. Ram Prasad K
Title: Tutorial on Transformers, an advanced deep learning architecture.
Abstract: Many state-of-the-art results in Computer Vision and Natural Language Processing are currently achieved using an advanced deep learning architecture referred to as the Transformer. It is a neural network architecture based solely on the self-attention mechanism and is highly parallelizable in nature. Transformers handle variable-size inputs using stacks of self-attention layers. The architecture makes no assumptions about the temporal or spatial relationships across the data and can learn long-range dependencies. In this tutorial, we will dive into the conceptual insights and mathematical intuitions behind this architecture and look at example implementations of Transformers for both text and image classification.
Short Bio: Dr Ram Prasad received his Ph.D degree in Computer Science Engineering from Autonomous University of Madrid, Spain, and pursued his postdoctoral research work at Dublin City University, Ireland. His doctoral and postdoctoral research works were funded by the prestigious European Union Marie-Curie Fellowship. His doctoral research work got nominated for the prestigious European Biometrics and Research Award. He was a visiting researcher at University of Twente, The Netherlands, and University of Halmstad, Sweden. He graduated in Computer Science from Chennai Mathematical Institute, India.
His expertise is in the areas of classical machine learning, deep learning, computer vision, biometrics, software development, and algorithm design. He was the founder and director of VisionCog Research and Development Private Limited, India, where he managed commercial R&D work in the domain of computer vision and biometrics. Currently he is an Assistant Professor of Computer Science at Shiv Nadar University Chennai and focuses his teaching and research on the domain of Artificial Intelligence. His primary research interests are Mathematical Optimization for Machine Learning, Explainable AI, and Probabilistic Machine Learning.