Antonis Argyros
Computer Science Department
University of Crete, Greece
In this talk, we provide an overview of our work on computational methods for tracking human motion and for the semantic interpretation of human activities, based on unobtrusive computer vision techniques that rely on the processing and analysis of markerless visual data. We focus on tracking the 3D position, orientation and full articulation of the human body and human body parts and we show how this is employed to solve problems of varying complexity, ranging from 3D tracking of a hand (possibly in interaction with objects) up to action recognition, gesture interpretation and intention prediction. Finally, we show how our work can support the development of vision systems aiming at intuitive human-robot interaction and human-robot collaboration as well as the development of interactive exhibits in the context of smart environments.
Antonis Argyros is a Professor of Computer Science at the Computer Science Department (CSD), University of Crete (UoC), and a researcher at the Institute of Computer Science (ICS), Foundation for Research and Technology-Hellas (FORTH) in Heraklion, Crete, Greece. Since 1999, as a member of the Computational Vision and Robotics Laboratory (CVRL) of ICS-FORTH, he has been involved in several European and national RTD projects on computer vision, pattern recognition, image analysis and robotics. His current research interests fall in the areas of computer vision and pattern recognition, with emphasis on the analysis of humans in images and videos, human pose analysis, recognition of human activities and gestures, 3D computer vision, as well as image motion and tracking. He is also interested in applications of computer vision in the fields of robotics and smart environments. In these areas, he has published numerous research papers in scientific journals and refereed conference proceedings and has delivered invited talks at international events, universities and research centers. Antonis Argyros has served on the organizing and program committees of several international vision, graphics and robotics conferences and on the editorial boards of computer vision, image analysis and robotics journals.
Dean J. Krusienski
Department of Biomedical Engineering
Virginia Commonwealth University, USA
Brain-computer interfaces (BCIs) and related neuroprosthetics are systems that decode and provide real-time feedback of ongoing brain activity. Such technologies can be used in assistive, rehabilitative, augmentative, diagnostic, or therapeutic applications. This lecture will highlight recent progress in invasive and noninvasive BCI research in humans, including speech decoding from intracranial signals and EEG-based neurofeedback for immersive virtual reality.
Dean J. Krusienski received the B.S., M.S., and Ph.D. degrees in electrical engineering from The Pennsylvania State University, University Park, PA, USA. He conducted postdoctoral research in the Brain-Computer Interface Laboratory, Wadsworth Center of the New York State Department of Health. He is currently a Professor and Graduate Program Director in the Department of Biomedical Engineering at Virginia Commonwealth University (VCU), Richmond, VA, USA, where he directs the Advanced Signal Processing in Engineering and Neuroscience (ASPEN) Lab. His research interests include brain-computer interfaces, neural signal analysis, machine learning, and applications to virtual/augmented reality. His lab has received support from NSF, NIH, and NIA/NASA.
Joachim Köhler
Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, Germany
This presentation gives an overview of AI platforms in Europe. The strategic role and importance of AI platforms are increasing: users expect intelligent tools, algorithms and data to be available in an easy and open way, and several platform activities are therefore underway. The AI4EU platform will be the leading AI on-demand platform of Europe, providing information, social networking capabilities, a one-stop shop for European AI resources and a collaborative AI experimentation area. Other European AI-related platforms are also on the way and focus on specific areas, such as the European Language Grid platform. Several activities have also been started at the national level: Fraunhofer is leading the SPEAKER initiative, which provides a platform for speech/voice assistance systems. All these platforms should help preserve European sovereignty over AI-related data and technology.
Dr. Joachim Köhler received his diploma and Dr.-Ing. degree in Communication Engineering from RWTH Aachen and the Munich University of Technology in 1992 and 2000, respectively. In 1993 he joined the Realization Group of ICSI in Berkeley, where he investigated robust speech processing algorithms. From 1994 until 1999 he worked in the speech group of the research and development centre of Siemens AG in Munich. The topic of his PhD thesis was multilingual speech recognition and acoustic phone modelling. Since June 1999 he has been with Fraunhofer IAIS in Sankt Augustin, where he heads the NetMedia department. The research focus of NetMedia lies in the area of multimedia indexing and search methods and applications. His current research interests include pattern recognition, machine/deep learning, speech recognition, spoken document and multimedia retrieval, and cloud-based multimedia information architectures. He was the technical coordinator of the European IP project Linked-TV and now acts as technical manager in the AI4EU project, building a European AI on-demand platform. Finally, he leads the SPEAKER project: a speech assistant platform made in Germany.