Localization and Mapping

Localization and mapping is fundamental to the design of autonomous systems. Recently the emphasis has been on the design of a multi-level graphical model that represents the world in terms of objects, events, and agents. The research focuses not only on a suite of detectors and relational estimators, but also on a methodology for integrating prior models and knowledge into the estimation process. In indoor settings we build semantic models across rooms, places, and objects to enable organization of and search for objects. In outdoor settings we build models of the road layout and detect other road users to allow for construction of a dynamic scene model. The underlying graphical estimation and learning is a foundation for a broad range of applications.
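To make the graphical-estimation idea concrete, here is a minimal, hypothetical sketch (not the lab's actual system): a one-dimensional pose graph in which odometry factors and a loop-closure factor are combined and solved as a weighted linear least-squares problem, distributing the accumulated drift across the trajectory.

```python
import numpy as np

def solve_pose_graph(n_poses, factors, prior_on_first=True):
    """Solve a 1-D pose graph by linear least squares.
    factors: list of (i, j, z, w) constraints meaning x_j - x_i = z with weight w."""
    rows, rhs, weights = [], [], []
    if prior_on_first:  # anchor pose 0 to remove the gauge freedom
        r = np.zeros(n_poses)
        r[0] = 1.0
        rows.append(r); rhs.append(0.0); weights.append(1e6)
    for i, j, z, w in factors:
        r = np.zeros(n_poses)
        r[i], r[j] = -1.0, 1.0
        rows.append(r); rhs.append(z); weights.append(w)
    sw = np.sqrt(np.array(weights))[:, None]      # weighted least squares
    A = np.array(rows) * sw
    b = np.array(rhs) * sw.ravel()
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Odometry claims each of three steps moves +1.0, but a loop-closure
# factor insists pose 3 coincides with pose 0; the estimator spreads
# the inconsistency evenly over the chain.
odometry = [(0, 1, 1.0, 1.0), (1, 2, 1.0, 1.0), (2, 3, 1.0, 1.0)]
loop = [(0, 3, 0.0, 1.0)]
x = solve_pose_graph(4, odometry + loop)
```

Real systems use nonlinear factors over SE(2)/SE(3) poses and sparse solvers, but the structure (factors as weighted residuals, stacked into one estimation problem) is the same.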

Autonomous Vehicle Laboratory

The Autonomous Vehicle Laboratory studies Level 4-5 autonomous cars for urban transportation. The laboratory collaborates with a number of industrial companies to field real systems. We study perception, systems integration, local mapping, adaptive planning, and interaction with other road users such as pedestrians and bicyclists. The research is performed in collaboration with UCSD facilities in a trial of automated logistics services across campus.

Cognitive Robotics

The next challenge in mobile robotics is to endow systems with cognitive capabilities. Cognition implies the competence to represent knowledge about the external world in terms of objects, events, and agents, to autonomously acquire such knowledge, and to reason about the world to facilitate action generation. The research focuses on several aspects of cognitive robots, such as recognition of objects and activities, and dialog generation to enable interaction with humans as part of learning and execution of tasks.

Sensor Based Manipulation

Traditionally, robot manipulators have achieved their accuracy through excellent mechanisms and strong models for control. This has enabled the design of robots with accuracies below 1 mm. To achieve repeatable accuracies better than 0.1 mm, there is a need to integrate sensors into the outer feedback loop. A number of sensory modalities can be utilized, such as force-torque, tactile, and computer vision. We are particularly interested in vision and range data for non-contact sensing, and in force-torque sensing for control in contact configurations. The objective here is to integrate multiple models into hybrid dynamic control models that optimize accuracy and robustness.
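As a simplified illustration of the hybrid-control idea (an assumed textbook-style sketch, not the lab's actual controller), a selection matrix can route each Cartesian axis either to a position loop or to a force loop, e.g. regulating contact force along the surface normal while servoing position in the tangent plane.

```python
import numpy as np

def hybrid_control_step(x, f_meas, x_des, f_des, select, kp_pos=1.0, kp_force=0.01):
    """One step of hybrid force/position control; returns a Cartesian velocity command.
    select: per-axis flags, 1 = position-controlled, 0 = force-controlled."""
    S = np.diag(select)                       # selection matrix
    v_pos = kp_pos * (x_des - x)              # proportional position error
    v_force = kp_force * (f_des - f_meas)     # proportional force error
    return S @ v_pos + (np.eye(len(x)) - S) @ v_force

# Example: servo x,y position while regulating a 5 N contact force along z.
x = np.array([0.0, 0.0, 0.1])
f = np.array([0.0, 0.0, 2.0])                 # measured 2 N along z
v = hybrid_control_step(x, f,
                        x_des=np.array([0.1, 0.0, 0.0]),
                        f_des=np.array([0.0, 0.0, 5.0]),
                        select=[1, 1, 0])
```

A production controller would add integral/derivative terms, compliance models, and proper frame transformations, but the split between contact-normal force control and tangential position control is the core of the hybrid formulation.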

Human Robot Interaction

The acceptance of robots by non-experts is essential to wide adoption and utilization of robot systems, and Human-Robot Interaction (HRI) is central to such acceptance. This requires consideration of all aspects of HRI, from design through social interaction to physical interaction. In our research we focus in particular on physical HRI and the interplay between design and interaction.