About me

I’m a second-year M.S. student at UT Austin advised by Prof. Philipp Krähenbühl and Prof. Yuke Zhu. Before coming to Austin, I spent four wonderful years at UC Berkeley as an undergraduate and a researcher at BAIR, advised by Prof. Trevor Darrell and Prof. Jitendra Malik.

My research lies at the intersection of vision, robotics, and machine learning. In recent years, I have mainly worked on prediction and planning for autonomous driving.

News

  • I graduated from UC Berkeley with Highest Honors in Applied Mathematics, Honors in Computer Science and Highest Distinction in General Scholarship.

Projects

Fighting Copycat Agents in Behavioral Cloning from Multiple Observations

Chuan Wen*, Jierui Lin*, Trevor Darrell, Dinesh Jayaraman, Yang Gao
NeurIPS 2020
paper talk

[Figure: the copycat problem in an autonomous driving scenario]

Here is an example of the copycat problem in an autonomous driving scenario: the vehicle waits at the red light and starts to accelerate when the light turns green. A copycat policy that simply replays its previous action will predict all but one action correctly in the offline dataset while being useless in closed-loop evaluation.

To combat this copycat problem, we propose an adversarial approach to learning a feature representation that removes excess information about the previous expert action (a nuisance correlate) while retaining the information necessary to predict the next action.
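As a rough, hypothetical sketch of this kind of adversarial objective (not the exact architecture from the paper): a gradient-reversal layer lets an auxiliary head try to decode the previous action from the learned feature, while the reversed gradient trains the encoder to erase that information. All module names and dimensions below are assumptions.

```python
import torch
import torch.nn as nn

OBS_DIM, HISTORY, ACT_DIM = 64, 4, 2  # hypothetical sizes

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on backward."""
    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

encoder = nn.Sequential(nn.Linear(OBS_DIM * HISTORY, 256), nn.ReLU())
policy_head = nn.Linear(256, ACT_DIM)  # predicts the expert's next action
adversary = nn.Linear(256, ACT_DIM)    # tries to recover the previous action

def training_loss(obs_history, prev_action, next_action):
    z = encoder(obs_history)
    # Imitation loss: the feature must stay predictive of the next action.
    policy_loss = nn.functional.mse_loss(policy_head(z), next_action)
    # Adversarial loss: the adversary decodes the previous action from z;
    # the reversed gradient pushes the encoder to remove that information.
    adv_loss = nn.functional.mse_loss(adversary(GradReverse.apply(z)), prev_action)
    return policy_loss + adv_loss
```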

Keyframe-Focused Visual Imitation Learning

Chuan Wen*, Jierui Lin*, Jianing Qian, Yang Gao, Dinesh Jayaraman
ICML 2021
paper

Imitation learning trains control policies by mimicking pre-recorded expert demonstrations. In partially observable settings, imitation policies must rely on observation histories, but many seemingly paradoxical results show better performance for policies that only access the most recent observation.

Recent solutions ranging from causal graph learning to deep information bottlenecks have shown promising results, but fail to scale to realistic settings such as visual imitation. We propose a solution that outperforms these prior approaches by upweighting demonstration keyframes corresponding to expert action changepoints. This simple approach scales easily to complex visual imitation settings.
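As a hedged sketch of the idea (the paper's exact weighting scheme may differ), one could weight each frame of a demonstration by the magnitude of the expert's action change, so that rare changepoints dominate the behavioral-cloning loss. All names below are illustrative.

```python
import torch

def keyframe_weights(actions: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Weight each frame by how sharply the expert action changes there.

    actions: (T, act_dim) expert actions for one demonstration.
    Returns (T,) weights with mean roughly 1; frames at action changepoints
    (e.g. the moment the expert starts braking) receive the largest weight.
    """
    diffs = torch.zeros(actions.shape[0])
    diffs[1:] = (actions[1:] - actions[:-1]).norm(dim=-1)
    return diffs * actions.shape[0] / (diffs.sum() + eps)

# Usage: scale the per-frame imitation loss before averaging.
# per_frame_loss: (T,) behavioral-cloning losses for the trajectory
# loss = (keyframe_weights(actions) * per_frame_loss).mean()
```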

3D Shape Reconstruction from Free-Hand Sketches

Jiayun Wang, Jierui Lin, Qian Yu, Runtao Liu, Yubei Chen, Stella X. Yu
Preprint
paper

[Figure: sketch-based 3D reconstruction framework and results]

Sketches are among the most abstract 2D representations of real-world objects. Although a sketch usually contains geometric distortions and lacks visual cues, humans can effortlessly envision a 3D object from it.

We pioneer the study of this task, aiming to enhance the power of sketches in 3D-related applications such as interactive design and VR/AR games, and we propose a novel sketch-based 3D reconstruction framework (shown above).

Learning a Perception-Logic Network for Unsupervised Scene Conditioned Driving Behavior

Jierui Lin*, Yifei Xing*, Huazhe Xu, Trevor Darrell, Yang Gao

[Figure: perception-logic network overview]

Human driving behavior cannot be explained by traffic rules alone; people take many scene factors into account. An autonomous driving system should exhibit these behaviors as well, since doing so not only ensures natural interactions with human drivers and pedestrians but also improves driving safety.

We propose to learn these scene factors in an unsupervised manner from demonstration driving data alone, and to combine the learned factors with a logic network that outputs the final driving behavior.
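Purely as an illustrative sketch of the architecture described above (not the paper's implementation; every name and size here is an assumption), a perception module can output soft scene factors that a small logic-style network combines into a driving behavior:

```python
import torch
import torch.nn as nn

class PerceptionLogicPolicy(nn.Module):
    """Hypothetical sketch: infer soft scene factors (e.g. 'pedestrian
    nearby', 'intersection ahead') without factor labels, then map factor
    combinations to a driving behavior, trained end to end on demonstrations."""

    def __init__(self, feat_dim: int = 512, n_factors: int = 8, n_behaviors: int = 4):
        super().__init__()
        self.perception = nn.Linear(feat_dim, n_factors)  # image features -> factor logits
        self.logic = nn.Linear(n_factors, n_behaviors)    # factor combinations -> behavior scores

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        factors = torch.sigmoid(self.perception(image_features))  # soft truth values in [0, 1]
        return self.logic(factors)
```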