The Computational Learning and Motor Control Lab focuses its research on neural computation for sensorimotor control and learning. Neural computation combines knowledge from biology with knowledge from physics and engineering in order to develop a more fundamental and formal understanding of information processing in complex systems. On the one hand, the goal is to discover new technologies by studying the principles of biological behavior and information processing, recognizing that even simple biological systems achieve a sensorimotor competence and computational self-organization far superior to that of artificial systems. On the other hand, formalizing biological information processing will equally advance our knowledge of the organization of nervous systems and, subsequently, the development of new methods for medical and clinical diagnosis and treatment, including neural prosthetic devices.
One part of our research is concerned with learning in neural networks, statistical learning, and machine learning, since the ability to learn and self-organize seems to be among the most important prerequisites of autonomous systems.
Another part of the research program focuses on how movement can be generated, in particular in human-like systems with bodies, limbs, and eyes. This research touches the fields of control theory, nonlinear control, nonlinear dynamics, optimization theory, and reinforcement learning.
In a third research branch, we investigate human performance by measuring subjects' movements in specially designed behavioral tasks, and also by measuring their brain activity with neuroimaging techniques. Such research connects closely to work in computational neuroscience for motor control, and it includes abstract functional models of how brains may organize sensorimotor coordination.
A fourth part of the research in the lab emphasizes studies with actual humanoid and biologically inspired robots. With this work, we are first interested in testing our learning and control theories on real physical systems in order to evaluate the robustness of our research results. Another challenge arises from scaling our methods to complex robots: our most advanced robot (similar to the pictures on the right) requires the nonlinear control of 30-40 physical degrees of freedom that need to be coordinated with visual, tactile, and acoustic perception. When attempting to synthesize behavior with such a machine, the shortcomings of state-of-the-art learning and control theories can be discovered and addressed in subsequent research. Finally, we also use humanoid robots for direct comparisons in behavioral experiments in which the robot is treated like a regular human subject.
At the CLMC lab, we have an amazing number of state-of-the-art experimental robots. These include a Sarcos Humanoid Robot, a Willow Garage PR2, a Sarcos Master Arm, a Sarcos Slave Arm, a Sarcos Active Vision Head, the Boston Dynamics Little Dog robot, a NAO small humanoid, and a Barrett WAM Arm/Hand. Pictures of these robots are on the right.
The Computational Learning and Motor Control Lab is also part of the Computational Neuroscience and Humanoid Robotics Department, located at the ATR Laboratories in Japan. Members of the lab have ample opportunities to visit the Japanese research facilities for short or extended stays, as we have a lab-owned permanent apartment next to ATR.