Research


The key areas of my research are Artificial Intelligence, Normative Multiagent Systems, Virtual Worlds, Motion Capture, Virtual Heritage, Human Computer Interaction and Electronic Commerce.


In my 2007 PhD thesis, “Virtual Institutions”, I developed the concept of Virtual Institutions, including its formal representation, supporting algorithms and technological solutions.

This is a fundamental technology that is being used in many research projects I am involved with in Australia and overseas.

Current Projects:


Uruk 3000 B.C.

The aim of the project is to recreate the ancient city of Uruk, as it stood around 3000 B.C., in the Virtual World of Second Life, letting history students experience how the city looked and how its citizens behaved. The virtual world provides a unique collaborative environment for history experts, archaeologists, anthropologists, designers and programmers to meet, share their knowledge and work together on making the city and the behaviour of its virtual population historically authentic. One feature that sets this project apart from the majority of other virtual environments in Second Life is the availability of so-called virtual humans (or autonomous agents). The agents are graphical characters (avatars) that “live” in the virtual world and re-enact the typical daily activities of the ancient people. They are controlled by an Artificial Intelligence engine based on the Virtual Institutions technology, in which agents can play different roles, communicate and interact with other agents or humans, synchronise their actions with other participants, and collaborate with them.
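The idea of institution-governed agents can be illustrated with a minimal sketch. The class, role names and action sets below are assumptions made for illustration only; they are not the formal model from the thesis or the actual engine used in the project.

```python
# Illustrative sketch: an agent whose actions are filtered by the
# permissions its institutional role grants it.

class Agent:
    def __init__(self, name, role, allowed_actions):
        self.name = name
        self.role = role
        self.allowed = set(allowed_actions)

    def act(self, action):
        """Perform an action only if the institution permits it for this role."""
        if action in self.allowed:
            return f"{self.name} ({self.role}) performs {action}"
        return f"{self.name}: action '{action}' rejected by institution"

# Hypothetical role with hypothetical permitted actions.
potter = Agent("Enki", "potter", {"shape_clay", "fire_kiln", "trade"})
print(potter.act("shape_clay"))
print(potter.act("declare_war"))  # not permitted for this role
```

The point of the sketch is the separation of concerns: what an avatar *may* do is decided by the institution, not hard-coded into each agent.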

Visit the Uruk Project website here.


Robot Teleoperation via Motion Capture

Tele-operation of humanoid robots has enormous potential in rescue situations that are too dangerous for a human being, but where the dexterity of a legged humanoid body shape is required. Examples include fire rescue, nuclear accidents and chemical spills. One of the challenges in controlling a humanoid robot lies in designing an appropriate user interface: humanoid robots typically have over 20 degrees of freedom, and each motor needs to be controlled simultaneously. This is very difficult with traditional input devices such as joysticks, keyboards and mice. In this work we propose using real-time motion capture for robot tele-operation. Motion capture is the recognition and recording of human movement, a technique widely employed in computer animation. Current motion capture equipment can track human movement with unprecedented precision and accuracy and is suitable for real-time control of humanoid robots. We show examples of robot tele-operation via motion capture, discuss existing problems and our current progress, as well as the potential for using this technology in emergency situations.
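The core of the mapping problem can be sketched as follows. The joint names, limits and capture format are illustrative assumptions, not the actual robot interface used in the project: each captured human joint angle is mapped to the corresponding motor and clamped to the motor's mechanical range.

```python
# Hypothetical sketch: retargeting motion-capture joint angles
# onto robot motor commands, clamped to joint limits (radians).

JOINT_LIMITS = {  # illustrative limits, not a real robot's specification
    "shoulder_pitch": (-2.0, 2.0),
    "shoulder_roll": (-1.5, 1.5),
    "elbow": (0.0, 2.4),
}

def clamp(value, low, high):
    return max(low, min(high, value))

def retarget(captured_angles):
    """Map captured human joint angles onto robot motor targets,
    skipping joints the robot does not have and clamping the rest."""
    commands = {}
    for joint, angle in captured_angles.items():
        if joint in JOINT_LIMITS:
            low, high = JOINT_LIMITS[joint]
            commands[joint] = clamp(angle, low, high)
    return commands

frame = {"shoulder_pitch": 2.5, "elbow": 1.0, "head_yaw": 0.3}
print(retarget(frame))  # {'shoulder_pitch': 2.0, 'elbow': 1.0}
```

In practice such a loop runs at the capture frame rate, sending one command set per frame; the clamping step is what keeps an exaggerated human motion from driving a motor past its limits.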



Motion Capture as a control interface

This project investigates novel forms of human-computer interaction, with a particular focus on using motion capture in virtual worlds and video games. We explore the suitability of various motion capture devices (ranging from inertial motion capture suits to video cameras and depth sensors) as a control interface for navigating a virtual world and interacting with objects and other participants. We have developed technologies for translating the real-time motion of a human user onto an arbitrary virtual world avatar, as well as facilities for tracking user gestures to control the locomotion and flying of an avatar in a virtual environment.
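A gesture-to-locomotion mapping of the kind described above can be sketched with simple thresholds. The pose fields, threshold values and command names below are assumptions for illustration; the actual project's gesture recogniser is not specified here.

```python
# Illustrative sketch: mapping a simplified pose reading to an
# avatar locomotion command using threshold rules.

def classify_gesture(pose):
    """Return an avatar command from torso lean and arm elevation (degrees).
    Fields and thresholds are hypothetical."""
    if pose["arm_elevation"] > 80:   # both arms raised: fly
        return "fly"
    if pose["torso_lean"] > 15:      # leaning forward: walk
        return "walk"
    return "idle"

print(classify_gesture({"arm_elevation": 90, "torso_lean": 0}))   # fly
print(classify_gesture({"arm_elevation": 10, "torso_lean": 20}))  # walk
```

Real gesture recognisers are usually more robust (smoothing over several frames, hysteresis between states), but the threshold structure is the same.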



Real-Time Motion Streaming for Games

The future of interactive video games is most definitely going to be shaped by full body motion capture.

At last, you will have to stand up from your couch and do some proper exercise. The skills you learn in the game will not be wasted, but can be applied directly in the real world... and anybody with basic programming skills and an understanding of physics can become a game developer. In this project we use motion capture to develop a basketball video game in the virtual world of Second Life. There is no need for any control devices: the experience (a fully multiuser 3D game) is controlled entirely by your body. Body movement is translated directly onto the avatar, and interactions with the ball are supported by recognising whether the user is holding, dribbling or throwing the ball, based on the physical characteristics of the corresponding body parts.
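The hold/dribble/throw distinction can be sketched as a small classifier over hand kinematics. The distance and speed thresholds, and the choice of features, are illustrative assumptions rather than the game's actual recognition logic.

```python
# Hypothetical sketch: classify a ball interaction from the hand's
# distance to the ball (metres), its speed (m/s), and the vertical
# component of its motion direction.

def classify_ball_action(hand_ball_distance, hand_speed, hand_direction_y):
    if hand_ball_distance > 0.3:
        return "none"      # hand not near the ball
    if hand_speed < 0.2:
        return "hold"      # hand on ball, barely moving
    if hand_direction_y < 0:
        return "dribble"   # fast downward push
    return "throw"         # fast upward/outward motion

print(classify_ball_action(0.1, 1.0, -0.5))  # dribble
print(classify_ball_action(0.1, 1.0, 0.5))   # throw
```

A production version would track these features over a window of frames rather than a single sample, but the decision structure is the same.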