Recent developments in sensor technology have made it feasible to equip mobile robots with high-fidelity sensors and deploy them in a range of real-world applications. However, the ability to accurately sense and model the environment is still lacking. Widespread deployment of mobile robots is therefore feasible only when they can autonomously plan actions that enable them to learn environmental models, detect changes, and adapt the learned models in response to those changes. This talk presents two examples of such learning and planning for autonomous operation, using visual input.
First, I shall describe work in the legged league of the robot soccer framework. Here, the robot autonomously models action outcomes and plans action sequences to learn models of color distributions. In addition, the robot detects illumination changes and adapts the learned models to operate autonomously over a range of illuminations. All algorithms are fully implemented and run in real time on a team of legged robots.
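The color-modeling and illumination-change ideas above can be illustrated with a minimal sketch. This is not the talk's actual implementation; the per-channel Gaussian models, the log-likelihood test, and the threshold value are all illustrative assumptions.

```python
# Hypothetical sketch: per-channel Gaussian color models learned from
# labeled pixel samples, plus a simple illumination-change test based on
# the average log-likelihood of recently observed pixels.
import math
from statistics import mean, pstdev

class GaussianColorModel:
    """One Gaussian per color channel, fit to labeled (r, g, b) samples."""
    def __init__(self, samples):
        channels = list(zip(*samples))
        # Floor the std-dev so a degenerate channel cannot divide by zero.
        self.params = [(mean(c), max(pstdev(c), 1e-3)) for c in channels]

    def log_likelihood(self, pixel):
        """Sum of independent per-channel Gaussian log-densities."""
        ll = 0.0
        for x, (mu, sigma) in zip(pixel, self.params):
            ll += -0.5 * ((x - mu) / sigma) ** 2 \
                  - math.log(sigma * math.sqrt(2 * math.pi))
        return ll

def illumination_changed(model, recent_pixels, threshold=-20.0):
    """Flag a change when recent pixels become unlikely under the learned
    model; the threshold here is an arbitrary illustrative value."""
    avg = mean(model.log_likelihood(p) for p in recent_pixels)
    return avg < threshold
```

In this toy version, a sustained drop in likelihood would trigger re-learning of the color models, mirroring the adapt-on-change behavior the abstract describes.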
Next, I shall describe work on the EU Cognitive Systems (CoSy) initiative, where we enable robots to autonomously tailor their visual processing to the task at hand. The visual processing problem is posed as a Partially Observable Markov Decision Process (POMDP). The robot autonomously learns probabilistic models of action outcomes, and plans its actions by trading off plan execution time against plan reliability. A hierarchical structure is introduced to make planning with POMDPs feasible. The algorithms are tested in a scenario where humans and robots converse about and manipulate objects on a table-top.
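Two ingredients of such a POMDP formulation can be sketched compactly: the Bayesian belief update over hypotheses (e.g., object labels), and a time-versus-reliability trade-off when choosing a visual operator. The operator names, accuracies, costs, and the trade-off weight below are illustrative assumptions, and a real POMDP solver plans over action sequences rather than this one-step greedy choice.

```python
# Hypothetical sketch of two POMDP ingredients for visual processing.

def belief_update(belief, obs, obs_model):
    """Bayes rule: P(s | o) is proportional to P(o | s) * P(s).
    belief: dict state -> prior prob; obs_model: dict state -> {obs: prob}."""
    posterior = {s: obs_model[s][obs] * p for s, p in belief.items()}
    z = sum(posterior.values())
    return {s: p / z for s, p in posterior.items()}

def pick_operator(operators, weight=0.5):
    """One-step greedy trade-off between execution time and unreliability:
    cost = time + weight * (1 - accuracy). Illustrative only; a planner
    would optimize this trade-off over whole action sequences."""
    return min(operators,
               key=lambda op: op["time"] + weight * (1 - op["accuracy"]))
```

For example, after observing a mug-like feature, the belief should shift toward the "mug" hypothesis, and a fast-but-rough operator may be preferred when its time savings outweigh its lower reliability.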
Dr Mohan Sridharan
Mohan Sridharan received his Ph.D. from The University of Texas at Austin in August 2007. Since then he has been a Research Fellow at the University of Birmingham, working on the CoSy project. His research interests include robot (and computer) vision, machine (and reinforcement) learning, and autonomous systems.