Keynotes


Responsible Data Management: Ethics, fairness, and bias issues in querying and analytics of human-centric data

Gautam Das

Dr. Gautam Das

Associate Dean for Research
University of Texas at Arlington, USA
gdas@cse.uta.edu

Biography:
Dr. Gautam Das is the Associate Dean for Research, College of Engineering, a Distinguished University Chair Professor of Computer Science and Engineering, Director of the Center for Artificial Intelligence and Big Data (CARIDA), and Director of the Database Exploration Laboratory (DBXLAB) at UT-Arlington. Prior to joining UTA in 2004, he held positions at Microsoft Research, Compaq Corporation, and the University of Memphis. He received a B.Tech. in computer science from IIT Kanpur, India, in 1983, and a Ph.D. in computer science from the University of Wisconsin-Madison in 1990. He is a Fellow of the IEEE and a Fellow of the ACM.

In this talk, we focus on fairness issues that arise during the querying and analysis of human-centric data. For example, a user may issue such queries to retrieve suitable employment opportunities in a jobs database, dating partners on a matching website, or apartments to rent in a real estate database. We will discuss how such querying mechanisms can sometimes produce discriminatory results, and describe approaches to detect, mitigate, and prevent such scenarios. Our work represents some of the initial steps towards the broader goal of integrating responsible approaches into data management processes that deal with human-centric data.



ARNA: The Adaptive Robot Nursing Assistant for Hospital Walking and Sitting

Dan Popa

Dr. Dan Popa

Director
Louisville Automation and Robotics Research Institute (LARRI), USA
dan.popa@louisville.edu

Biography:
Dr. Dan Popa has over 30 years of research experience in robotics and automation. His early research included adaptive force control and motion planning for nonholonomic robots. In 1998, he joined the Center for Automation Technologies at Rensselaer Polytechnic Institute as a Research Scientist, where he focused on precision robotics and micromanufacturing. In 2004, he became an Assistant and then an Associate Professor of Electrical Engineering at the University of Texas at Arlington. Since 2016, he has been the Vogt Endowed Chair in Advanced Manufacturing and a Professor of Electrical and Computer Engineering at the University of Louisville. He is currently the Director of the Louisville Automation and Robotics Research Institute (LARRI) and the Head of the Next Generation Systems (NGS) Research Group, conducting research in two main areas: 1) social and physical human-robot interaction through adaptive interfaces and robot tactile skins; and 2) the design, characterization, modeling, and control of microscale and precision robotic systems. Dr. Popa is the author of over 300 peer-reviewed conference and journal articles, mainly in IEEE and ASME publications. He has been very active in the IEEE Robotics and Automation Society (RAS), including extensive competition, workshop, conference, and journal service.

In this talk we will present recent progress in delivering walking and sitting services with our home-grown Adaptive Robot Nursing Assistant (ARNA). The robot was conceived and built in our lab with support from NSF's PFI:BIC, I-Corps, and FW-HTF programs over the last decade. The robot adapts to the user via two innovative controllers: the neuroadaptive controller (NAC), which enables physical interaction, and the Genetic User Interface (GUI), which enables telemanipulation. We summarize experimental results from testing ARNA with nearly 100 nursing students, demonstrating user acceptance and increased performance compared to traditional methods of hospital care.



Interactive Robot Perception and Learning for Mobile Manipulation

Georgia Chalvatzaki

Dr. Georgia Chalvatzaki

Professor for Interactive Robot Perception & Learning (PEARL),
Computer Science Department, Technische Universität Darmstadt, Germany
chalvatzaki@tu-darmstadt.de

Biography:
Dr. Georgia Chalvatzaki, Full Professor of Interactive Robot Perception and Learning, holds a joint appointment at the Computer Science Department of the Technical University of Darmstadt and Hessian.AI. Prior to this, she served as an Assistant Professor and Independent Research Group Leader, securing the prestigious Emmy Noether grant from the German Research Foundation (DFG) in March 2021. She completed her Ph.D. in 2019 at the National Technical University of Athens, Greece, where she was part of the Intelligent Robotics and Automation Lab within the Electrical and Computer Engineering School. Her doctoral thesis, titled "Human-Centered Modeling for Assistive Robotics: Stochastic Estimation and Robot Learning in Decision-Making," laid the foundation for her current research interests, which include robot learning, planning, and perception.

The long-standing ambition for autonomous, intelligent service robots that are seamlessly integrated into our everyday environments is yet to become a reality. Humans develop an understanding of their embodiment by interpreting their actions within the world and acting reciprocally to perceive it: the environment affects our actions, and our actions simultaneously affect our environment. Despite great advances in robotics and Artificial Intelligence (AI), e.g., through better hardware designs and algorithms incorporating advances in Deep Learning, we are still far from achieving robotic embodied intelligence. Attaining artificial embodied intelligence, i.e., intelligence that originates and evolves through an agent's sensorimotor interaction with its environment, remains an open challenge and a topic of substantial scientific investigation. In this talk, I will walk you through our recent research on endowing robots with spatial intelligence through perception and interaction, so that they can coordinate and acquire the skills necessary for promising real-world applications. In particular, we will see how robotic priors can be used to learn to coordinate mobile manipulation robots, how neural representations can allow for learning policies and safe interactions, and, at the crux, how we can leverage those representations to allow the robot to understand and interact with a scene, or to guide it to acquire more “information” while acting in a task-oriented manner.



Towards Video Situation Analysis

Sharma Chakravarthy

Dr. Sharma Chakravarthy

Professor, CSE Department and IT Lab,
The University of Texas at Arlington, USA
sharmac@cse.uta.edu

Biography:
Dr. Sharma Chakravarthy is an ACM Distinguished Scientist, an IEEE Senior Member, and a Fulbright Specialist who has worked at the Rome Air Force Research Laboratory (AFRL) as a Faculty Fellow, researching continuous query processing over fault-tolerant networks and video stream analysis. His current research includes big data analysis using multi-layered networks, stream data processing for disparate domains (e.g., video analysis), scaling graph mining algorithms for analyzing very large social and other networks, active and real-time databases, distributed and heterogeneous databases, query optimization (single, multiple, logic-based, and graph), and multimedia databases. Prof. Chakravarthy received a B.E. in Electrical Engineering from the Indian Institute of Science, Bangalore, and an M.Tech. from IIT Bombay, India. He received M.S. and Ph.D. degrees from the University of Maryland, College Park, in 1981 and 1985, respectively.

In this talk, an out-of-the-box approach to video analysis is presented. Videos have become ubiquitous due to the availability of hand-held devices, dash cams, and monitoring gadgets. However, the video analysis imperative for “real-world situation monitoring” lags far behind the rapid growth of video acquisition. The aim is to enable video analysis capability that can detect situations such as the Pelosi house break-in, the Las Vegas shooting, and innumerable other events. Our work on video situation monitoring addresses this in two steps: i) establishing a robust framework for situation analysis of videos by extending stream and continuous query processing techniques, and ii) adding “real-time” capability to the framework. The ability to ask “ad hoc” and “what-if” queries on videos is important, and applicability across domains (domain independence) is critical. In this presentation, we will discuss our approach in terms of video content extraction, modeling of the extracted content using two different models, and alternative approaches for video analysis over the two models. We will also discuss some of our earlier work on “real-time” processing relevant to this effort.
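As a rough illustration of the continuous-query flavor of this framework (a minimal sketch under assumptions, not the actual system: the detector interface, the labels, and the window length are hypothetical), per-frame object detections can be treated as a data stream over which a situation predicate is evaluated continuously in a sliding window:

    from collections import deque

    WINDOW = 30  # sliding window length in frames (assumed; about 1 s at 30 fps)

    def content_stream(frames, detector):
        """Content extraction: yield (frame_no, label_set) for each frame.
        `detector` is a hypothetical object detector returning items with a .label."""
        for frame_no, frame in enumerate(frames):
            yield frame_no, {d.label for d in detector(frame)}

    def situation_query(stream, required={"person", "door"}):
        """Continuous query: report frames where all required labels
        co-occur in every frame of the sliding window."""
        window = deque(maxlen=WINDOW)
        for frame_no, labels in stream:
            window.append(required <= labels)  # does the predicate hold this frame?
            if len(window) == WINDOW and all(window):
                yield frame_no  # situation persisted over the whole window

In this sketch, an ad hoc or what-if query amounts to registering a different predicate over the same extracted content stream, without reprocessing the raw video.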