Keynotes


Responsible Data Management: Ethics, fairness, and bias issues in querying and analytics of human-centric data

Gautam Das

Dr. Gautam Das

Associate Dean for Research
University of Texas at Arlington, USA
gdas@cse.uta.edu

Biography:
Dr. Gautam Das is the Associate Dean for Research of the College of Engineering, a Distinguished University Chair Professor of Computer Science and Engineering, Director of the Center for Artificial Intelligence and Big Data (CARIDA), and Director of the Database Exploration Laboratory (DBXLAB) at UT-Arlington. Prior to joining UTA in 2004, he held positions at Microsoft Research, Compaq Corporation, and the University of Memphis. He graduated with a B.Tech. in computer science from IIT Kanpur, India, in 1983 and received a Ph.D. in computer science from the University of Wisconsin-Madison in 1990. He is a Fellow of the IEEE and a Fellow of the ACM.

In this talk, we focus on fairness issues that arise during the querying and analysis of human-centric data. For example, a user may issue such queries to retrieve suitable employment opportunities from a jobs database, dating partners from a matchmaking website, or apartments to rent from a real estate database. We will discuss how such querying mechanisms can sometimes give results that are discriminatory, and describe approaches to detect, mitigate, and prevent such scenarios. Our work represents some of the initial steps toward the broader goal of integrating responsible approaches into data management processes that deal with human-centric data.
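As a small, hypothetical illustration of the kind of detection step discussed in the talk (not the speaker's specific method), the sketch below compares the demographic composition of a top-k query result against the underlying candidate pool; the table, columns, scores, and thresholds are all invented for illustration.

import sqlite3

def group_shares(rows, group_index):
    """Fraction of rows belonging to each value of a protected attribute."""
    counts = {}
    for row in rows:
        counts[row[group_index]] = counts.get(row[group_index], 0) + 1
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def disparity(result_rows, pool_rows, group_index):
    """Ratio of a group's share in the result to its share in the pool.
    Values far below 1.0 suggest the query under-represents that group."""
    res = group_shares(result_rows, group_index)
    pool = group_shares(pool_rows, group_index)
    return {g: res.get(g, 0.0) / share for g, share in pool.items() if share > 0}

# Toy "jobs database"; schema and data are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE candidates (name TEXT, gender TEXT, score REAL)")
conn.executemany("INSERT INTO candidates VALUES (?, ?, ?)",
                 [("a", "F", 0.91), ("b", "M", 0.95), ("c", "F", 0.88),
                  ("d", "M", 0.97), ("e", "M", 0.93), ("f", "F", 0.90)])

pool = conn.execute("SELECT * FROM candidates").fetchall()
top_k = conn.execute(
    "SELECT * FROM candidates ORDER BY score DESC LIMIT 3").fetchall()

# gender is column index 1 in this toy schema
print(disparity(top_k, pool, group_index=1))

In this toy run the top-3 ranking returns no candidates from one group even though the pool is balanced, the sort of disparity a detection mechanism would flag before mitigation.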



ARNA: The Adaptive Robot Nursing Assistant for Hospital Walking and Sitting

Dan Popa

Dr. Dan Popa

Director
Louisville Automation and Robotics Research Institute (LARRI), USA
dan.popa@louisville.edu

Biography:
Dr. Dan Popa has over 30 years of research experience in robotics and automation. His early research included adaptive force control and motion planning for nonholonomic robots. In 1998, he joined the Center for Automation Technologies at Rensselaer Polytechnic Institute as a Research Scientist, where he focused on precision robotics and micromanufacturing. In 2004, he became an Assistant and then an Associate Professor of Electrical Engineering at the University of Texas at Arlington. Since 2016, he has been the Vogt Endowed Chair in Advanced Manufacturing and a Professor of Electrical and Computer Engineering at the University of Louisville. He is currently the Director of the Louisville Automation and Robotics Research Institute (LARRI) and the Head of the Next Generation Research Group (NGS), which conducts research in two main areas: 1) social and physical human–robot interaction through adaptive interfaces and robot tactile skins; and 2) the design, characterization, modeling, and control of microscale and precision robotic systems. Dr. Popa is the author of over 300 peer-reviewed conference and journal articles, mainly in IEEE and ASME publications. He has been very active in the IEEE Robotics and Automation Society (RAS), including extensive competition, workshop, conference, and journal service.

In this talk, we will present recent progress in delivering walking and sitting assistance with our home-grown Adaptive Robot Nursing Assistant (ARNA). The robot was conceived and built in our lab with support from NSF's PFI:BIC, I-Corps, and FW-HTF programs over the last decade. ARNA adapts to the user via two innovative controllers: a neuroadaptive controller (NAC) enabling physical interaction, and a Genetic User Interface (GUI) enabling telemanipulation. We summarize experimental results from testing ARNA with nearly 100 nursing students, demonstrating acceptance and improved performance compared to traditional methods of hospital care.
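As a rough, hypothetical sketch of how a neuroadaptive controller of this general family operates (a feedback term plus a neural-network term whose weights adapt online from the tracking error), the snippet below simulates a single joint; the gains, basis functions, and plant dynamics are invented for illustration and do not reproduce the ARNA NAC or GUI.

import numpy as np

dt, steps = 0.001, 5000
kv, gamma, kappa = 20.0, 5.0, 0.1          # control gain, learning rate, e-mod term
centers = np.linspace(-2.0, 2.0, 11)        # RBF centers over the state range

def phi(x):
    """Radial-basis features approximating the unknown dynamics term."""
    return np.exp(-(x - centers) ** 2)

w_hat = np.zeros_like(centers)              # adaptive NN weights
q, qd = 0.0, 0.0                            # joint position / velocity
q_des = lambda t: 0.5 * np.sin(2.0 * t)     # desired trajectory (hypothetical)

for i in range(steps):
    t = i * dt
    e = q_des(t) - q                        # position tracking error
    ed = np.cos(2.0 * t) - qd               # desired velocity minus actual velocity
    r = ed + 5.0 * e                        # filtered tracking error
    tau = kv * r + w_hat @ phi(q)           # feedback + NN compensation torque
    # weight update with e-modification for robustness
    w_hat += dt * (gamma * phi(q) * r - kappa * abs(r) * gamma * w_hat)
    # toy 1-DOF plant with an unknown gravity/friction-like term
    qdd = tau - 2.0 * np.sin(q) - 0.5 * qd
    qd += dt * qdd
    q += dt * qd

print(f"final tracking error: {q_des(steps * dt) - q:.4f}")

The point of the sketch is the structure, a fixed feedback gain plus an online-adapted compensation term, which lets the controller adjust to different users without an explicit dynamic model.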



Interactive Robot Perception and Learning for Mobile Manipulation

Georgia Chalvatzaki

Dr. Georgia Chalvatzaki

Professor for Interactive Robot Perception & Learning (PEARL),
Computer Science Department, Technische Universität Darmstadt

Biography:
Dr. Georgia Chalvatzaki is a Full Professor of Interactive Robot Perception and Learning, holding a joint appointment at the Computer Science Department of the Technical University of Darmstadt and Hessian.AI. Prior to this, she served as an Assistant Professor and Independent Research Group Leader, securing the prestigious Emmy Noether grant from the German Research Foundation (DFG) in March 2021. She completed her Ph.D. in 2019 at the National Technical University of Athens, Greece, where she was part of the Intelligent Robotics and Automation Lab within the School of Electrical and Computer Engineering. Her doctoral thesis, titled "Human-Centered Modeling for Assistive Robotics: Stochastic Estimation and Robot Learning in Decision-Making," laid the foundation for her current research interests, which include robot learning, planning, and perception.

The long-standing ambition for autonomous, intelligent service robots that are seamlessly integrated into our everyday environments has yet to become a reality. Humans develop an understanding of their embodiment by interpreting their actions within the world and acting reciprocally to perceive it: the environment affects our actions, and our actions simultaneously affect our environment. Despite great advances in robotics and Artificial Intelligence (AI), e.g., through better hardware designs or algorithms that incorporate advances in Deep Learning, we are still far from achieving robotic embodied intelligence. Attaining artificial embodied intelligence, that is, intelligence that originates and evolves through an agent's sensorimotor interaction with its environment, remains an open challenge and a topic of substantial scientific investigation. In this talk, I will walk you through our recent research on endowing robots with spatial intelligence through perception and interaction, so that they can coordinate and acquire the skills necessary for promising real-world applications. In particular, we will see how we can use robotic priors for learning to coordinate mobile manipulation robots, how neural representations can enable learning policies and safe interactions, and, at the crux, how we can leverage those representations to allow the robot to understand and interact with a scene, or to guide it to acquire more "information" while acting in a task-oriented manner.