Day 2:
Swarm Technology, USA
Time: 10:00-10:45
Alfonso Iniguez is the Founder of Swarm Technology, a company focused on intent-based computing and swarm robotics. He has published research papers in distributed artificial intelligence, computer modeling, and design verification. Drawing inspiration from ants and octopuses, he originated the five principles of swarm intelligence. His patented technology enables the dynamic addition of processors for uninterrupted distributed processing in intent-based IoT edge processing and swarm robotics. He has held diverse engineering positions at Motorola, Freescale Semiconductor, Integrated Device Technology, and Microchip Technology. He holds an MS degree in Electrical Engineering from the University of Arizona and a BS degree in Computer Engineering from the Universidad Autónoma de Guadalajara, Mexico.
Background: Various companies and academic institutions are actively researching the field of swarm robotics. A survey of the topic reveals two distinct approaches: A. Each swarm member behaves autonomously without a central computer, e.g. Harvard University's 1024-robot swarm. B. Each swarm member is controlled by a central computer, e.g. the Intel drones showcased in Disney's light shows and at Super Bowl 2017.
Description of the Problem: In case A, the system falls into the realm of flocking behavior. Such a system suffers from deficits in: 1. Awareness: members are not aware of their available capabilities. 2. Autonomy: members must be told what to do. 3. Solidarity: members lack the ability to accomplish a mission using collective intelligence. In case B, members are slaves of a system controlled by a central computer. Such a system suffers from deficits in: 4. Expandability: members cannot be added dynamically. 5. Resiliency: the system lacks the ability to self-heal when members are removed.
Description of the Solution: Alfonso Iniguez is the first researcher to design an architecture that complies with the five principles of swarm intelligence: 1. Awareness: each member is aware of its available capabilities. 2. Autonomy: each member operates autonomously; this is essential to self-coordinate allocation of labor. 3. Solidarity: each member continuously volunteers its available capabilities until the mission is accomplished. 4. Expandability: members can be dynamically aggregated ad infinitum. 5. Resiliency: members can be removed while the system self-heals ad infinitum. The proposed solidarity cell architecture goes beyond flocking behavior and spectacular light shows. The technology will enable unmanned ground-air reconnaissance missions, precision farming, manufacturing robots, autonomous fleet management, and interplanetary exploration.
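The volunteer-based allocation of labor described by principles 1-3, together with dynamic aggregation and self-healing (principles 4-5), can be sketched in a toy simulation. All names here (`Member`, `Mission`, `run`) are hypothetical illustrations, not Swarm Technology's patented solidarity cell architecture:

```python
# Toy sketch of volunteer-based swarm task allocation. Hypothetical
# illustration only -- not the patented solidarity cell design.

class Member:
    """A swarm member that knows its own capabilities (Awareness)."""
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)

class Mission:
    """A set of pending tasks, each requiring one capability."""
    def __init__(self, tasks):
        self.pending = list(tasks)
        self.completed = []

def run(mission, members):
    """Each member autonomously volunteers for tasks it can perform
    (Autonomy, Solidarity). Because allocation happens per round, members
    may be appended to or removed from `members` between rounds and the
    remaining members absorb the work (Expandability, Resiliency)."""
    progress = True
    while mission.pending and progress:
        progress = False
        for task in list(mission.pending):
            for m in members:
                if task in m.capabilities:  # member volunteers its capability
                    mission.pending.remove(task)
                    mission.completed.append((task, m.name))
                    progress = True
                    break
    return mission

# Usage: two members with disjoint capabilities complete a joint mission.
members = [Member("rover", {"drive", "sample"}), Member("drone", {"fly", "scan"})]
mission = run(Mission(["scan", "sample", "drive"]), members)
```

Because no member is commanded by a central controller, removing a member merely leaves its tasks pending for any other capable volunteer, which is the essence of the self-healing property.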
Westminster University, UK
Time: 10:45-11:30
Daphne Economou has been a Senior Lecturer in the Department of Computer Science, Faculty of Science and Technology at the University of Westminster since January 2006. She holds a PhD in Virtual Reality Systems Design from Manchester Metropolitan University and an MA in Design for Interactive Media (Multimedia) from Middlesex University, and she is a Senior Fellow of the Higher Education Academy. She has published numerous journal papers and peer-reviewed international conference papers, has served on the programme committees of several international conferences, and has organized and chaired workshops at IEEE international conferences related to serious games. She has industrial experience as a Human Factors Engineer at Sony Broadcast and Development Research Labs, Basingstoke, UK, and is a member of the British Computer Society, IEEE, and the British Interactive Media Association (BIMA).
For the last four decades, computer science researchers and industry have worked intensively to develop technology that would revolutionize how humans interact with computers and with each other, focusing their efforts and hopes on virtual reality (VR), augmented reality, and mixed reality to realize this vision. Nowadays, with advances in head-mounted displays, mobile and networking technology, wearables, smart environments, artificial intelligence, and machine learning, the infrastructure required to support seamless human interaction in VR and to facilitate rich user experiences is falling into place. Application domains in which VR has great impact span education and training, culture, e-commerce, tourism, healthcare, entertainment, and new forms of broadcasting. However, these technological advances and application requirements create new challenges in terms of the interaction styles and design approaches that must be adopted to ensure that users feel fully immersed in the computer-simulated or mixed reality environment with which they interact, and fully engaged in the activities in which they participate. There is a need for a user-centered design framework and design guidelines to support VR designers in creating stimulating environments and applications, and to drive further VR technological development. The keynote speech will present the state of the art of VR technology, discuss the user experience challenges that derive from current trends in VR, and present some attempts by the Serious Games at Westminster Research Group (SG@W) to develop design guidelines for virtual human representation in VR and for the use of gamification as a design element to enhance user engagement in VR.
Anyline GmbH, Austria
Aniello R. Patrone is a Computer Vision Engineer at Anyline GmbH, Vienna, Austria. A computer scientist by education, he completed Master's studies in Computer Vision in Naples, Italy, where he worked on the development of a marketed eye-tracker solution. His curiosity led him to pursue a PhD in Computer Vision at the Computational Science Center of the University of Vienna, Austria. He has a proven record of publications in scientific journals and presentations at international conferences. Stepping out of academia into industry, he worked on video surveillance systems and recently joined the innovative company Anyline GmbH.
The path from a research idea to a market product for customers' use is filled with unexpected events and challenges. This presentation will look at the evolution of machine learning over the last ten years and at the technological shift from external devices to mobile devices for document scanning. The story of the Document Scanner developed at Anyline GmbH in Vienna, Austria, will start with an exemplary initial approach based on pure computer vision, analyzing its limitations and real-life issues. It will continue with the next step, the deep learning approach, in which some interesting CNN architectures will be presented and analyzed. Finally, a closer look will be taken at how to define image quality and how to implement it in a marketed product.
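As one illustration of what "defining image quality" can mean for a document scanner, a widely used sharpness heuristic is the variance of the Laplacian: blurry images have fewer edges and therefore lower variance. The pure-Python sketch below is illustrative only and does not reflect Anyline's actual quality metric:

```python
# Variance-of-Laplacian sharpness heuristic -- a common image-quality proxy.
# Illustrative sketch only; not Anyline's production metric.

def laplacian_variance(img):
    """img: 2-D list of grayscale values. Returns the variance of the
    4-neighbour Laplacian response over interior pixels; higher means
    sharper (more edge energy)."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A high-contrast checkerboard patch scores higher than a flat grey patch.
sharp = [[255 if (x + y) % 2 else 0 for x in range(5)] for y in range(5)]
flat = [[128] * 5 for _ in range(5)]
```

In a product, a threshold on such a score can gate whether a captured frame is sharp enough to pass on to the recognition pipeline, rejecting blurred frames before any CNN inference runs.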