

USC Distinguished Lecture Day of Robotics

April 1, 2010, at the University of Southern California, HNB 107

This symposium is organized with the generous help of the Robotics: Science and Systems 2010 Program Committee.

Time         Speaker and Talk
9:00         Sanjiv Singh (CMU)
             Navigating with Range
9:35         Cyrill Stachniss (Univ. Freiburg)
             Hierarchical Optimization on Manifolds for Online 2D and 3D Mapping
10:10        David Hsu (National University of Singapore)
             A POMDP Approach to Robot Motion Planning under Uncertainty
10:45-11:00  Coffee Break
11:00        Danica Kragic (Royal Institute of Technology (KTH))
             Active Vision for Detecting, Fixating, Manipulating Objects and Learning of Human Actions
11:35        Jan Peters (Max Planck Institute for Biological Cybernetics)
             Motor Skill Learning for Robotics
12:10-13:15  Lunch Break
13:15        Jana Kosecka (George Mason University)
             Semantic Segmentation of Street Scenes
13:50        Alin Albu-Schaeffer (German Aerospace Center)
             Soft Robotics concepts for robots interacting with humans and their environment
14:25        Katsu Yamane (Disney Research and CMU)
             Towards Human-like Motions of Humanoid Robots
15:00        Coffee Break
15:15        Russ Tedrake (MIT)
             Feedback design for machines that exploit their dynamics
15:50        Stefan Williams (Australian Centre for Field Robotics)
             Simultaneous Localisation and Mapping and Dense Stereoscopic Seafloor Reconstruction using an AUV
16:25        James Kuffner (CMU)
             Motion Planning for Constrained Manipulation Tasks
17:00        End of Meeting

Navigating with Range
Sanjiv Singh
Carnegie Mellon University

Typically, SLAM systems use sensors that provide both bearing and range to distinct features in the environment. One variant considers the case of using bearing alone (as with features tracked in image sequences) to map and localize. We are interested in a case that has not been well studied: what if agents could sense only range to features in the environment and to other agents? Looking closely at this problem has brought up an interesting agenda that spans topics in SLAM, sensor networks, and coordinated control of multiple robots. It also fulfills a need for localization in environments whose ambient conditions are anathema to optical methods. I will discuss some of our recent results and point to open problems.
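Range-only localization of the kind described here can be illustrated with a small sketch: given ranges to three beacons at known positions, the standard linearization (subtracting the first range equation from the others) reduces position recovery to a 2x2 linear system. The beacon coordinates and ranges below are made up for illustration, not taken from the talk:

```python
import math

def trilaterate(beacons, ranges):
    """Estimate a 2D position from ranges to three known beacons.

    Subtracting the first range equation from the other two cancels the
    quadratic terms, leaving a 2x2 linear system solved by Cramer's rule.
    """
    (x0, y0), (x1, y1), (x2, y2) = beacons
    r0, r1, r2 = ranges
    # 2*(xi - x0)*x + 2*(yi - y0)*y = xi^2 - x0^2 + yi^2 - y0^2 - ri^2 + r0^2
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = x1**2 - x0**2 + y1**2 - y0**2 - r1**2 + r0**2
    b2 = x2**2 - x0**2 + y2**2 - y0**2 - r2**2 + r0**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Hypothetical setup: agent at (3, 4), beacons at known positions.
beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
truth = (3.0, 4.0)
ranges = [math.dist(b, truth) for b in beacons]
x, y = trilaterate(beacons, ranges)
```

With noisy ranges and more beacons the same linearization becomes an overdetermined least-squares problem, which is where the SLAM and sensor-network machinery takes over.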

Biography:
Sanjiv Singh is currently a Research Professor at the Robotics Institute, Carnegie Mellon University. His recent work has two main themes: perception in natural and dynamic environments and multi-agent coordination. Currently he leads efforts in UAVs operating in cluttered, near-earth environments, agricultural robotics, and in coordinating teams of robots for a variety of tasks. Dr. Singh is the Editor In Chief of the Journal of Field Robotics.


Hierarchical Optimization on Manifolds for Online 2D and 3D Mapping
Cyrill Stachniss
University of Freiburg

I will present a new hierarchical optimization solution to the graph-based simultaneous localization and mapping problem. During online mapping, the approach corrects only the coarse structure of the scene and not the overall map. In this way, only updates for the parts of the map that need to be considered for making data associations are carried out. The hierarchical approach provides accurate non-linear map estimates while being highly efficient. The error minimization approach exploits the manifold structure of the underlying space. In this way, it avoids singularities in the state space parameterization. The overall approach is accurate, efficient, designed for online operation, overcomes singularities, provides a hierarchical representation, and outperforms a series of state-of-the-art methods.
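The point about exploiting the manifold structure of the state space can be made concrete with a tiny sketch: orientations live on a circle, so pose-graph residuals must be wrapped rather than subtracted naively. The numbers below are illustrative, not from the talk:

```python
import math

def angle_diff(a, b):
    """Manifold-aware difference of two planar orientations, wrapped to (-pi, pi]."""
    d = a - b
    return math.atan2(math.sin(d), math.cos(d))

# Near the +/- pi seam, a naive Euclidean difference misreads a small
# rotation as an almost full turn; the wrapped residual does not.
a, b = math.pi - 0.1, -math.pi + 0.1
naive = a - b               # about 2*pi - 0.2, a huge apparent error
wrapped = angle_diff(a, b)  # about -0.2, the true residual on the circle
```

Feeding the naive residual into a least-squares optimizer would pull the graph toward a spurious full rotation; operating on the manifold avoids exactly this kind of singularity.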

Biography:
Cyrill Stachniss is with the University of Freiburg in Germany. Since October 2009, he has been the deputy head of the Autonomous Intelligent Systems lab, which focuses on mobile robotics. Before that, he worked as an academic advisor, finishing his habilitation in 2009. He spent his post-doc time at the University of Freiburg with Wolfram Burgard and partially at ETH Zurich with Roland Siegwart. Since 2008, he has been an associate editor of the IEEE Transactions on Robotics.

In his research, Cyrill Stachniss focuses on probabilistic techniques in the context of mobile robotics. His interests cover autonomous exploration in combination with solutions to the simultaneous localization and mapping problem. He is also interested in classification and learning approaches, including scene analysis, as well as in computer-controlled cars, robotic vision, and related navigation problems.


A POMDP Approach to Robot Motion Planning under Uncertainty
David Hsu
National University of Singapore

Motion planning in uncertain and dynamic environments is critical for the reliable operation of autonomous robots. Partially observable Markov decision processes (POMDPs) provide a powerful framework for such planning tasks and have been successfully applied to several moderately complex robotic tasks, including navigation, manipulation, and target tracking. The challenge now is to scale up POMDP planning algorithms to handle more complex, realistic tasks. I will outline some ideas aimed at overcoming two major obstacles to the efficiency of POMDP planning: the "curse of dimensionality" and the "curse of history". Our main objective is to show that, using these ideas along with others, POMDP algorithms can be used successfully for motion planning under uncertainty for robotic tasks with a large number of states or a long time horizon. I will also highlight some challenges ahead.
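As a minimal sketch of the machinery POMDPs rest on, here is the exact belief update (a Bayes filter) applied to a hypothetical two-state, tiger-style toy problem; the transition and observation probabilities are made up for illustration and are not from the talk:

```python
def belief_update(belief, action, obs, T, O):
    """Bayes filter at the heart of POMDP planning:
    b'(s') is proportional to O(o | a, s') * sum_s T(s' | s, a) * b(s)."""
    states = list(belief)
    predicted = {s2: sum(T[(s, action, s2)] * belief[s] for s in states)
                 for s2 in states}
    unnorm = {s2: O[(action, s2, obs)] * predicted[s2] for s2 in states}
    z = sum(unnorm.values())
    return {s2: v / z for s2, v in unnorm.items()}

# Toy problem: listening leaves the hidden state unchanged and yields the
# correct observation with probability 0.85 (hypothetical numbers).
states = ['L', 'R']
T = {(s, 'listen', s2): 1.0 if s == s2 else 0.0 for s in states for s2 in states}
O = {('listen', s, o): 0.85 if s == o else 0.15 for s in states for o in states}
b = belief_update({'L': 0.5, 'R': 0.5}, 'listen', 'L', T, O)
```

The "curse of history" shows up because a planner must reason over sequences of such updates; the "curse of dimensionality" because the belief itself lives in a space whose dimension grows with the number of states.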

Biography:
David Hsu is currently an associate professor of computer science at the National University of Singapore and a member of the NUS Graduate School for Integrative Sciences & Engineering (NGS). His research spans robotics, computational biology, and geometric computation. His current interests include robot motion planning under uncertainty.

He received his B.Sc. in computer science & mathematics from the University of British Columbia, Canada, and his Ph.D. in computer science from Stanford University, USA. After leaving Stanford, he worked at Compaq Computer Corp.'s Cambridge Research Laboratory and at the University of North Carolina at Chapel Hill. At the National University of Singapore, he held the Sung Kah Kay Assistant Professorship and was a Fellow of the Singapore-MIT Alliance.


Active Vision for Detecting, Fixating, Manipulating Objects and Learning of Human Actions
Danica Kragic
Royal Institute of Technology (KTH)

The ability to autonomously acquire new knowledge through interaction with the environment is one of the major research goals in the field of robotics. This knowledge can be acquired only if suitable perception-action capabilities are present: a robotic system has to be able to detect, attend to, and manipulate objects in its environment. In the first part of the talk, we present the results of our long-term work in the area of vision-based sensing and control. The work on finding, attending to, recognizing, and manipulating objects in domestic environments is discussed. More precisely, we present a stereo-based active vision framework in which aspects of top-down, bottom-up, and foveated attention are put into focus, and we demonstrate how the system can be utilized for object grasping.

The second part of the talk presents our work on the visual analysis of human manipulation actions, which is of interest for, e.g., human-robot interaction applications in which a robot learns how to perform a task by watching a human. We present a method for classifying manipulation actions in the context of the objects manipulated, and classifying objects in the context of the actions used to manipulate them. The action-object correlation over time is then modeled using conditional random fields. Experimental comparison shows an improvement in classification rate when the action-object correlation is taken into account, compared to separate classification of manipulation actions and manipulated objects.
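A minimal sketch of why joint classification can help, using hypothetical scores and a hand-made action-object compatibility table rather than the learned conditional random field of the actual work:

```python
# Hypothetical log-potentials for an action classifier, an object
# classifier, and an action-object compatibility term (made-up numbers).
action_score = {'pour': 0.4, 'drink': 0.6}
object_score = {'kettle': 0.55, 'cup': 0.45}
compat = {('pour', 'kettle'): 1.2, ('pour', 'cup'): 0.1,
          ('drink', 'kettle'): 0.05, ('drink', 'cup'): 0.9}

# Separate classification: each classifier picks its own best label.
independent = (max(action_score, key=action_score.get),
               max(object_score, key=object_score.get))

# Joint classification: maximize the combined score including compatibility.
joint = max(((a, o) for a in action_score for o in object_score),
            key=lambda ao: action_score[ao[0]] + object_score[ao[1]] + compat[ao])
```

Here the independent classifiers produce the implausible pair (drink, kettle), while the compatibility term steers the joint decision to (pour, kettle), mirroring the improvement the experiments report when action-object correlation is modeled.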


Motor Skill Learning for Robotics
Jan Peters
Max Planck Institute for Biological Cybernetics

Autonomous robots that can assist humans in situations of daily life have been a long-standing vision of robotics, artificial intelligence, and the cognitive sciences. A first step towards this goal is to create robots that can learn tasks triggered by environmental context or higher-level instruction. However, learning techniques have yet to live up to this promise, as only a few methods manage to scale to high-dimensional manipulators or humanoid robots. In this talk, we investigate a general framework for learning motor skills in robotics which is based on the principles behind many analytical robotics approaches. It involves generating a representation of motor skills by parameterized motor primitive policies acting as building blocks of movement generation, and a learned task execution module that transforms these movements into motor commands. We discuss learning on three different levels of abstraction: learning of accurate control, needed for execution; learning of motor primitives, needed to acquire simple movements; and learning of the task-dependent "hyperparameters" of these motor primitives, which allows learning complex tasks. We discuss task-appropriate learning approaches for imitation learning, model learning, and reinforcement learning for robots with many degrees of freedom. Empirical evaluations on several robot systems illustrate the effectiveness of the framework and its applicability to learning control on an anthropomorphic robot arm.
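As a rough sketch of the "motor primitive" building block, here is a one-dimensional primitive in the common dynamic-movement-primitive style; the gains, time scales, and the zero forcing term are illustrative assumptions, not the talk's actual formulation:

```python
def dmp_rollout(y0, goal, forcing, tau=1.0, alpha=25.0, beta=6.25,
                dt=0.001, steps=2000):
    """One-dimensional DMP-style primitive, integrated with Euler steps:
        tau * v' = alpha * (beta * (g - y) - v) + f(t)
        tau * y' = v
    The learned forcing term f shapes the movement; with f == 0 the
    spring-damper dynamics simply converge to the goal g.
    All constants here are illustrative, not from the talk."""
    y, v = y0, 0.0
    for k in range(steps):
        f = forcing(k * dt)
        v += dt * (alpha * (beta * (goal - y) - v) + f) / tau
        y += dt * v / tau
    return y

# With no forcing term the primitive settles at the goal.
y_end = dmp_rollout(0.0, 1.0, lambda t: 0.0)
```

Imitation learning then amounts to fitting the forcing term to a demonstrated trajectory, and reinforcement learning to adjusting its parameters (and task-level "hyperparameters" such as the goal) from reward.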

Biography:
Jan Peters is a senior research scientist and heads the Robot Learning Lab (RoLL) at the Max Planck Institute for Biological Cybernetics (MPI) in Tuebingen, Germany. He graduated from the University of Southern California (USC) with a Ph.D. in Computer Science. He holds two German M.S. degrees, in Informatics and in Electrical Engineering (from Hagen University and the Munich University of Technology), and two M.S. degrees, in Computer Science and Mechanical Engineering, from USC. Jan Peters has been a visiting researcher at the Department of Robotics at the German Aerospace Research Center (DLR) in Oberpfaffenhofen, Germany, at Siemens Advanced Engineering (SAE) in Singapore, at the National University of Singapore (NUS), and at the Department of Humanoid Robotics and Computational Neuroscience at the Advanced Telecommunication Research (ATR) Center in Kyoto, Japan. His research interests include robotics, nonlinear control, machine learning, reinforcement learning, and motor skill learning.


Semantic Segmentation of Street Scenes
Jana Kosecka
George Mason University

We present a novel approach for semantic segmentation of street-scene images into coherent regions, while simultaneously categorizing each region as one of the predefined categories representing commonly encountered object and background classes. We formulate the segmentation on small blob-based superpixels and exploit a visual vocabulary tree as an intermediate image representation. The main novelty of our approach is the introduction of an explicit model of spatial co-occurrence of visual words associated with superpixels and the utilization of appearance, geometry, and contextual cues in a probabilistic framework. We demonstrate how individual cues contribute towards global segmentation accuracy and how their combination yields performance superior to the best known method on a challenging benchmark dataset, which exhibits a diversity of street scenes with varying viewpoints and a large number of categories, captured in daylight and at dusk.
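A toy sketch of the spatial co-occurrence idea: counting which visual words appear on adjacent superpixels. The labels and adjacency below are made up for illustration, and the real system quantizes descriptors with a vocabulary tree rather than using string labels:

```python
from collections import Counter

def word_cooccurrences(word_of_superpixel, adjacency):
    """Count how often pairs of visual words occur on adjacent superpixels.

    Normalized, these counts give the spatial co-occurrence statistics from
    which a contextual cue can be built (e.g. 'road' borders 'car' often,
    'sky' borders 'road' rarely).
    """
    counts = Counter()
    for i, j in adjacency:
        pair = tuple(sorted((word_of_superpixel[i], word_of_superpixel[j])))
        counts[pair] += 1
    return counts

# Hypothetical labeling: superpixel index -> quantized visual word.
words = {0: 'road', 1: 'road', 2: 'car', 3: 'sky'}
edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
cooc = word_cooccurrences(words, edges)
```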


Soft Robotics concepts for robots interacting with humans and their environment
Alin Albu-Schaeffer
German Aerospace Center

Biography:
Alin Albu-Schäffer received his diploma degree in electrical engineering from the Technical University of Timisoara, Romania, in 1993 and his Ph.D. in control systems from the Technical University of Munich, Germany, in 2002. Since 1995, he has been with the Institute of Robotics and Mechatronics at the German Aerospace Center (DLR). Since 2009, he has been the head of the mechatronic components and systems department. His research interests include robot modelling and control, nonlinear control, flexible joint robots, impedance and force control, and physical human-robot interaction.


Towards Human-like Motions of Humanoid Robots
Katsu Yamane
Disney Research and Carnegie Mellon University

In spite of recent progress in humanoid motion synthesis and control, realizing human-like, stylistic motions is still a challenging problem. In this talk, I will introduce our effort towards efficient and robust algorithms for programming and controlling humanoid robots that leverage knowledge of different aspects of human motion, from joint trajectories to somatosensory reflexes.

Biography:
Dr. Katsu Yamane is currently a Senior Research Scientist at Disney Research, Pittsburgh and an Adjunct Associate Professor at Carnegie Mellon University. His research interests include humanoid robot control, character animation, and human motion analysis.


Provably stable robots that exploit nonlinear dynamics
Russ Tedrake
Massachusetts Institute of Technology

As demonstrated by concepts like passive dynamic walking, one of the grand challenges for robotics is to build systems which can exploit their natural dynamics in order to achieve superior efficiency and agility. Traditionally, there has been a gap between systematic, formal methods for motion planning and feedback design and the examples of high-performance, very dynamic robots. In this talk, I will describe feedback motion planning algorithms which can bridge this gap, achieving high performance with general and rigorous control design tools. I will give examples from walking robots and robots that fly like a bird.
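As a toy instance of such rigorous control design tools, here is a scalar discrete-time LQR gain computed by Riccati iteration; the unstable plant below is a hypothetical stand-in, not one of the talk's walking or flying examples:

```python
def dlqr_gain(a, b, q, r, iters=500):
    """Scalar discrete-time LQR: iterate the Riccati recursion
        p <- q + a*p*a - (a*p*b)^2 / (r + b*p*b)
    to a fixed point, then return the optimal feedback gain k (u = -k*x)."""
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * b * p)
        p = q + a * p * a - a * p * b * k
    return (b * p * a) / (r + b * b * p)

# Hypothetical unstable plant x[t+1] = 1.2*x[t] + u[t], unit state and
# control costs. The optimal gain places the closed-loop pole inside the
# unit circle, stabilizing the system.
k = dlqr_gain(1.2, 1.0, 1.0, 1.0)
closed_loop = 1.2 - 1.0 * k  # stable iff |closed_loop| < 1
```

Methods in the spirit of the talk extend this kind of locally optimal, provably stabilizing feedback from a single linearization to regions of the nonlinear state space.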

Biography:
Russ is the X Consortium Associate Professor of Electrical Engineering and Computer Science at MIT, and a member of the Computer Science and Artificial Intelligence Lab. He is a recipient of the NSF CAREER Award, the MIT Jerome Saltzer Award for undergraduate teaching, the DARPA Young Faculty Award, and was named a Microsoft Research New Faculty Fellow. Russ received his B.S.E. in Computer Engineering from the University of Michigan, Ann Arbor, in 1999, and his Ph.D. in Electrical Engineering and Computer Science from MIT in 2004, working with Sebastian Seung. After graduation, he joined the MIT Brain and Cognitive Sciences Department as a Postdoctoral Associate. During his education, he has also spent time at Microsoft, Microsoft Research, and the Santa Fe Institute.


Simultaneous Localisation and Mapping and Dense Stereoscopic Seafloor Reconstruction using an AUV
Stefan Williams
Australian Centre for Field Robotics

This talk will review current work being undertaken at the University of Sydney's Australian Centre for Field Robotics on efficient, stereo-based Simultaneous Localisation and Mapping and dense scene reconstruction suitable for creating detailed maps of seafloor survey sites using an Autonomous Underwater Vehicle (AUV). Techniques for three-dimensional scene reconstruction, visualisation, novelty detection, and classification are also discussed. A suite of tools has been developed for creating and visualising 3D, texture-mapped models of the seafloor, thereby providing marine scientists with a method for assessing the spatial distribution of various organisms of interest. The AUV Sirius has been operated on a number of cruises from 2007 through 2009 as part of the establishment of an AUV Facility associated with Australia's Integrated Marine Observing System (IMOS). Outcomes of these cruises are described, illustrating how advances in SLAM techniques are facilitating the construction of very large scale seafloor maps and the extraction of scientifically useful data.

Biography:
Dr. Stefan B. Williams is a Senior Lecturer in the University of Sydney's School of Aerospace, Mechanical and Mechatronic Engineering. He is a member of the Australian Centre for Field Robotics where he leads the Marine Robotics group. He is also the head of Australia's Integrated Marine Observing System AUV Facility. His research interests are focused on Simultaneous Localisation and Mapping in unstructured underwater environments, with a particular emphasis on fielding systems in support of marine science applications. He received his PhD from the University of Sydney in 2002 and completed a Bachelor of Applied Science with first class honours in 1997 at the University of Waterloo, Canada.


Motion Planning for Constrained Manipulation Tasks
James Kuffner
Carnegie Mellon University

One of the grand challenges in artificial intelligence is to create truly "general-purpose" autonomous robots for home, hospital, and office environments. This talk will discuss some of the challenges of motion autonomy and present an overview of some practical automatic motion planning methods for object grasping and manipulation under obstacle and other generalized task constraints. Experimental results on several humanoid platforms around the world will be shown, along with some new efforts in mobile manipulation. Finally, the long-term prospects for the future development of robot autonomy as it relates to search-based AI will be discussed.

Biography:
James Kuffner is an Adjunct Associate Professor at the Robotics Institute, School of Computer Science, Carnegie Mellon University. He received a B.S. and M.S. in Computer Science from Stanford University in 1993 and 1995, and a Ph.D. from the Stanford University Dept. of Computer Science Robotics Laboratory in 1999. He was a Japan Society for the Promotion of Science (JSPS) Postdoctoral Research Fellow at the University of Tokyo, Dept. of Mechano-Informatics Robotics Laboratory from 1999 to 2001. He joined the faculty at CMU in 2002. His research interests include robotics, motion planning, computational geometry, and computer graphics and animation. He received the Okawa Foundation Award for Young Researchers in 2007.


Location of Lecture Hall HNB 100

Designed by: Nerses Ohanyan & Jan Peters
Page last modified on March 31, 2010, at 08:31 AM