Hi! My name is Fanjun (Frank) Bu. I am a Ph.D. student at Cornell University. My interests are in assistive robotics, human-robot collaboration, and robot learning.
(Personal Robotics Lab) I worked on the Food Manipulation for Assisted Feeding project under the supervision of Tapomayukh Bhattacharjee and Prof. Siddhartha Srinivasa. Specifically, I surveyed robots' impact in social settings across the fields of sociology, psychology, and computer science. Based on this survey, I investigated the timing of robot-assisted feeding in a social setting and used a temporal convolutional network (TCN) to predict appropriate feeding time windows with minimal interruption to the ongoing social dynamics at the dinner table.
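The core building block of a TCN is a causal, dilated convolution over the time axis, so each prediction depends only on current and past observations. The sketch below is illustrative only (the function name, weights, and parameters are placeholders, not the project's actual model):

```python
# Minimal sketch of the causal, dilated convolution at the core of a TCN.
# All names and values here are illustrative assumptions.

def causal_dilated_conv(x, weights, dilation=1):
    """Convolve a 1-D signal so each output depends only on current and
    past inputs (causality), with gaps of `dilation` between the taps.
    The input is left-padded with zeros."""
    k = len(weights)
    pad = (k - 1) * dilation
    padded = [0.0] * pad + list(x)
    out = []
    for t in range(len(x)):
        # taps look back at times t, t - dilation, t - 2*dilation, ...
        s = sum(w * padded[t + pad - j * dilation]
                for j, w in enumerate(weights))
        out.append(s)
    return out

print(causal_dilated_conv([1, 2, 3, 4], [1.0, 1.0], dilation=2))
```

Stacking such layers with growing dilation lets the network cover long time windows of social cues with few parameters.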
(Intuitive Computing Lab) As robots perform manipulation tasks and interact with objects, they may accidentally drop objects (e.g., due to an inadequate grasp of an unfamiliar object) that subsequently bounce out of their visual fields. To enable robots to recover from such errors, we draw on the concept of object permanence: objects continue to exist even when they are not being sensed (e.g., seen) directly. In particular, we developed a multimodal neural network model, combining a partially observed trajectory with the audio resulting from the drop impact, to predict the full bounce trajectory and the end location of a dropped object.
[YouTube] (Paper submitted to IROS)
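In spirit, such a multimodal model fuses features from the two input streams before regressing the target. A minimal late-fusion sketch, assuming concatenation followed by one linear layer (the feature shapes and weights below are placeholders, not the trained model):

```python
# Minimal late-fusion sketch: trajectory and audio feature vectors are
# concatenated, then one linear layer regresses a 2-D end location.
# All weights and feature values are illustrative assumptions.

def linear(features, weights, bias):
    """One fully connected layer: y[i] = sum_j W[i][j] * x[j] + b[i]."""
    return [sum(w * f for w, f in zip(row, features)) + b
            for row, b in zip(weights, bias)]

def predict_end_location(traj_feat, audio_feat, weights, bias):
    fused = traj_feat + audio_feat  # concatenation fuses the two modalities
    return linear(fused, weights, bias)

print(predict_end_location([1.0, 0.0], [2.0],
                           weights=[[1.0, 1.0, 1.0], [0.0, 0.0, 1.0]],
                           bias=[0.0, 0.0]))
```

The audio branch matters because the impact sound carries information (material, bounciness) that the truncated visual trajectory alone does not.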
(HoneyLab) In the real world, humans often learn from temporally smooth data, that is, samples that are correlated across nearby points in time. In this project, we first investigated the effects of smoothness in training data on incremental learning in feedforward networks. Then, we demonstrated that two simple brain-inspired mechanisms, leaky memory in activation units and memory gating, can enable networks to exploit the redundancies in smooth data. Finally, we showed how these brain-inspired mechanisms alter the internal representations learned by the networks.
My contributions to the project include writing Python scripts to test the effect of temporally smooth data on the MNIST & Fashion-MNIST datasets, implementing the bio-inspired mechanisms, and analyzing the results.
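A leaky-memory unit keeps each activation as a running mixture of its previous value and the current input, so temporally smooth inputs are integrated rather than overwritten. A minimal sketch, where the decay factor `lam` is an illustrative assumption (memory gating would additionally reset the state at event boundaries):

```python
# Minimal sketch of a leaky-memory activation:
#   a_t = lam * a_{t-1} + (1 - lam) * x_t
# The decay factor `lam` here is illustrative, not a tuned value.

def leaky_memory(inputs, lam=0.5, state=0.0):
    """Return the sequence of leaky activations for a stream of inputs."""
    out = []
    for x in inputs:
        state = lam * state + (1 - lam) * x
        out.append(state)
    return out

print(leaky_memory([1.0, 1.0, 1.0]))
```

With a constant input, the activation converges toward that input; with smooth data, successive samples reinforce each other instead of interfering.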
(EPFL LASA) During my internship at the Learning Algorithms and Systems Laboratory (LASA) at EPFL, I designed the computer vision system for the benchmark project Skill Acquisition in Humans and Robots (SAHR).
Humans are sensitive to auditory sequence stimuli, and many experiments have shown that humans implicitly learn dynamic auditory sequences. However, does such an ability exist in vision? Can people learn dynamic visual sequences implicitly? Using computer-generated stimuli on Amazon MTurk, I investigated how humans learn dynamic visual sequences, and our study fills an open gap in the field of visual sequence learning.