Our research on cognitive robotics aims at a synthetic understanding of the computational mechanisms underlying human cognitive functions such as perception, action, and attention, which are self-organized through the learning of continuous sensorimotor experiences. To that end, we develop theories based on experimental and hypothetical findings in cognitive neuroscience, developmental psychology, and related fields. Each theory is embodied as a computational model, which is implemented in a robot; the theory is then tested through synthetic robotic experiments on learning cognitive tasks. As a computational principle for the brain, we focus in particular on predictive information processing, known as the free-energy principle or predictive coding. To build computational models, we use neural networks (deep learning techniques). We have proposed the stochastic continuous-time recurrent neural network (S-CTRNN), which can learn to predict both the mean and the uncertainty of its observations, and the stochastic multiple-timescale RNN (S-MTRNN), which introduces a multiple-timescale property into its neural dynamics. The cognitive functions we have studied include reaching, imitation, self-other discrimination, language, and communication.
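The two ingredients mentioned above can be illustrated in a few lines of code. The sketch below is a minimal, hypothetical illustration (not the authors' implementation): a leaky-integrator CTRNN update whose per-unit time constants split the state into fast and slow units, as in a multiple-timescale RNN, and a Gaussian negative log-likelihood loss through which a network can learn to predict both the mean and the variance (uncertainty) of its observations, as in the S-CTRNN. All function and weight names are illustrative assumptions.

```python
import numpy as np

def ctrnn_step(h, x, Wh, Wx, tau):
    """One leaky-integrator (CTRNN) update.

    h   : (n,) hidden state
    x   : (m,) input
    tau : (n,) per-unit time constants; a multiple-timescale RNN
          assigns small tau to 'fast' units and large tau to 'slow' ones
    """
    u = np.tanh(h) @ Wh.T + x @ Wx.T          # recurrent + input drive
    return (1.0 - 1.0 / tau) * h + (1.0 / tau) * u

def gaussian_nll(y, mu, log_var):
    """Negative log-likelihood of y under N(mu, exp(log_var)).

    Minimizing this loss trains a network to output both a predicted
    mean and a predicted uncertainty, the key idea behind the S-CTRNN.
    """
    var = np.exp(log_var)
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (y - mu) ** 2 / var)

rng = np.random.default_rng(0)
n, m = 8, 2
Wh = rng.normal(0.0, 0.5, (n, n))
Wx = rng.normal(0.0, 0.5, (n, m))
# fast units (tau = 2) and slow units (tau = 20) in one state vector
tau = np.concatenate([np.full(n // 2, 2.0), np.full(n // 2, 20.0)])

h = np.zeros(n)
for t in range(100):
    x = np.array([np.sin(0.1 * t), np.cos(0.1 * t)])
    h = ctrnn_step(h, x, Wh, Wx, tau)

# illustrative readout heads for predicted mean and log-variance
Wmu = rng.normal(0.0, 0.1, (m, n))
Wlv = rng.normal(0.0, 0.1, (m, n))
mu, log_var = np.tanh(h) @ Wmu.T, np.tanh(h) @ Wlv.T
loss = gaussian_nll(np.zeros(m), mu, log_var)
```

In a full model the weights would be trained by backpropagating this loss through time; the slow units then come to encode longer-range regularities while the fast units track moment-to-moment sensorimotor detail.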
As a future direction, in addition to extending our synthetic cognitive-robotics studies, we plan to analyze whole-cortical electrocorticogram (ECoG) recordings from non-human primates with deep learning techniques and to investigate integrated information theory (IIT) as a theory of consciousness, strengthening our collaborations with researchers in cognitive neuroscience and consciousness science. By combining these synthetic and analytical approaches, we aim to unravel the great mystery of human cognitive functions and intelligence.