Research
I'm broadly interested in robot learning, manipulation, and perception. Here are some of my research papers. Please feel free to reach out!
|
|
In-Context Imitation Learning via Next-Token Prediction
Letian Fu*, Huang Huang*, Gaurav Datta*, Lawrence Yunliang Chen*, William Chung-Ho Panitch, Fangchen Liu, Hui Li, Ken Goldberg
Preprint  
PDF /
Website /
Code /
Dataset
A robot policy that learns new tasks by prompting with robot trajectories, without any fine-tuning.
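As a rough illustration, here is a minimal sketch of the prompting loop (the model, environment, and their interfaces below are hypothetical stand-ins, not the released code):

```python
# Sketch of in-context imitation: condition a frozen sequence model on prompt
# trajectories, then decode actions for the new task via next-token prediction.
# `policy_model`, `env`, and their methods are hypothetical stand-ins.

def rollout_in_context(policy_model, prompt_trajectories, env, horizon=200):
    # Interleave (observation, action) pairs from the prompt demonstrations.
    context = []
    for traj in prompt_trajectories:          # each traj: list of (obs, action) pairs
        for obs, action in traj:
            context.append(("obs", obs))
            context.append(("act", action))

    obs = env.reset()
    for _ in range(horizon):
        context.append(("obs", obs))
        # The next action is predicted as the next token in the sequence.
        action = policy_model.predict_next_action(context)
        context.append(("act", action))
        obs, done = env.step(action)
        if done:
            break
```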
|
|
RoVi-Aug: Robot and Viewpoint Augmentation for Cross-Embodiment Robot Learning
Lawrence Yunliang Chen*, Chenfeng Xu*, Karthik Dharmarajan*, Zubair Irshad,
Richard Cheng, Kurt Keutzer, Masayoshi Tomizuka, Quan Vuong, Ken Goldberg
Conference on Robot Learning (CoRL), 2024   (Oral Presentation, 4.3%)
PDF /
Website /
Press /
Code
A data augmentation pipeline that uses diffusion models to synthesize novel robot embodiments and camera viewpoints in existing demonstration data.
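A rough sketch of such an augmentation pipeline is below; the segmentation, robot-swap, and novel-view models are hypothetical stand-ins rather than the actual RoVi-Aug interfaces:

```python
# Sketch of robot- and viewpoint-augmentation: replace the source robot in each
# frame with a different embodiment and re-render from a new viewpoint.
# All model handles passed in are hypothetical stand-ins.

def augment_trajectory(frames, actions, segment_robot, robot2robot, novel_view,
                       target_robot):
    augmented = []
    for frame, action in zip(frames, actions):
        mask = segment_robot(frame)                              # mask of source robot
        swapped = robot2robot(frame, mask, target=target_robot)  # diffusion robot swap
        new_view = novel_view(swapped)                           # diffusion view change
        # Action labels are carried over, assuming a shared end-effector action space.
        augmented.append((new_view, action))
    return augmented
```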
|
|
Mirage: Cross-Embodiment Zero-Shot Policy Transfer with Cross-Painting
Lawrence Yunliang Chen*, Kush Hari*, Karthik Dharmarajan*, Chenfeng Xu, Quan Vuong, Ken Goldberg
Robotics: Science and Systems (RSS), 2024  
PDF /
Website /
Code
Zero-shot transfer of a visuomotor policy trained on one robot to unseen robot embodiments by cross-painting the images.
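A minimal sketch of cross-painting at deployment time, assuming numpy-style image arrays, known camera calibration, and a renderer of the source robot; the helper functions are hypothetical:

```python
# Sketch of cross-painting: mask out the target robot in the live image and
# paint in a rendering of the source robot the policy was trained on, so the
# frozen policy sees images close to its training distribution.
# `render_source_robot` and `inpaint_background` are hypothetical stand-ins.

def cross_paint(image, target_robot_mask, joint_state, camera_params,
                render_source_robot, inpaint_background):
    background = inpaint_background(image, target_robot_mask)       # remove target robot
    source_layer, source_mask = render_source_robot(joint_state, camera_params)
    # Composite the rendered source robot over the inpainted background.
    return background * (1 - source_mask) + source_layer * source_mask
```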
|
|
Octo: An Open-Source Generalist Robot Policy
Dibya Ghosh*, Homer Walke*, Karl Pertsch*, Kevin Black*, Oier Mees*, Sudeep Dasari, Joey Hejna, Tobias Kreiman, Charles Xu,
Jianlan Luo, You Liang Tan, Lawrence Yunliang Chen, Pannag Sanketi, Quan Vuong, Ted Xiao, Dorsa Sadigh, Chelsea Finn, Sergey Levine
Robotics: Science and Systems (RSS), 2024  
PDF /
Website /
Code
An open-source generalist robot policy trained on a mixture of 25 datasets from the Open X-Embodiment dataset.
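A small sketch of weighted mixture sampling across heterogeneous datasets, in the spirit of training on a mix of Open X-Embodiment datasets; the weights and sampling scheme here are illustrative, not Octo's actual configuration:

```python
# Sketch of sampling training batches from a weighted mixture of datasets.
# Dataset contents and weights are illustrative only.
import random

def sample_batch(datasets, weights, batch_size, rng=random):
    """datasets: dict name -> list of examples; weights: dict name -> float."""
    names = list(datasets)
    probs = [weights[n] for n in names]
    batch = []
    for _ in range(batch_size):
        name = rng.choices(names, weights=probs, k=1)[0]  # pick a dataset by weight
        batch.append(rng.choice(datasets[name]))          # then a uniform example from it
    return batch
```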
|
|
DROID: A Large-Scale In-The-Wild Robot Manipulation Dataset
Alexander Khazatsky*, Karl Pertsch*, Suraj Nair, Ashwin Balakrishna, Sudeep Dasari, Siddharth Karamcheti, Soroush Nasiriany,
Mohan Kumar Srirama, Lawrence Yunliang Chen, Kirsty Ellis, Peter David Fagan, Joey Hejna, Masha Itkina, Marion Lepert,
Yecheng Jason Ma, Patrick Tree Miller, Jimmy Wu, Suneel Belkhale, Shivin Dass, Huy Ha, Arhan Jain, Abraham Lee, Youngwoon Lee,
Marius Memmel, Sungjae Park, Ilija Radosavovic, Kaiyuan Wang, Albert Zhan, Kevin Black, Cheng Chi, Kyle Beltran Hatch, Shan Lin,
Jingpei Lu, Jean Mercat, Abdul Rehman, Pannag R Sanketi, Archit Sharma, Cody Simpson, Quan Vuong, Homer Rich Walke, Blake Wulfe,
Ted Xiao, Jonathan Heewon Yang, Arefeh Yavary, Tony Z. Zhao, Christopher Agia, Rohan Baijal, Mateo Guaman Castro, Daphne Chen,
Qiuyu Chen, Trinity Chung, Jaimyn Drake, Ethan Paul Foster, Jensen Gao, David Antonio Herrera, Minho Heo, Kyle Hsu, Jiaheng Hu,
Donovon Jackson, Charlotte Le, Yunshuang Li, Kevin Lin, Roy Lin, Zehan Ma, Abhiram Maddukuri, Suvir Mirchandani, Daniel Morton,
Tony Nguyen, Abigail O'Neill, Rosario Scalise, Derick Seale, Victor Son, Stephen Tian, Emi Tran, Andrew E. Wang, Yilin Wu,
Annie Xie, Jingyun Yang, Patrick Yin, Yunchu Zhang, Osbert Bastani, Glen Berseth, Jeannette Bohg, Ken Goldberg, Abhinav Gupta,
Abhishek Gupta, Dinesh Jayaraman, Joseph J Lim, Jitendra Malik, Roberto Martín-Martín, Subramanian Ramamoorthy, Dorsa Sadigh,
Shuran Song, Jiajun Wu, Michael C. Yip, Yuke Zhu, Thomas Kollar, Sergey Levine, Chelsea Finn
Robotics: Science and Systems (RSS), 2024  
PDF /
Website /
Hardware Code /
Policy Learning Code
A dataset of 76k demonstration trajectories (350 hours of interaction data) collected across 564 scenes and 84 tasks by 50 data collectors over 12 months.
|
|
Open X-Embodiment: Robotic Learning Datasets and RT-X Models
Open X-Embodiment Collaboration.
Abhishek Padalkar, Acorn Pooley, Ajinkya Jain, Alex Bewley, Alex Herzog, Alex Irpan, Alexander Khazatsky,
Anant Rai, Anikait Singh, Anthony Brohan, Antonin Raffin, Ayzaan Wahid, Ben Burgess-Limerick, Beomjoon Kim,
Bernhard Schölkopf, Brian Ichter, Cewu Lu, Charles Xu, Chelsea Finn, Chenfeng Xu, Cheng Chi, Chenguang Huang,
Christine Chan, Chuer Pan, Chuyuan Fu, Coline Devin, Deepak Pathak, Dhruv Shah, Dieter Büchler, Dmitry Kalashnikov,
Dorsa Sadigh, Edward Johns, Federico Ceola, Fei Xia, Freek Stulp, Gaoyue Zhou, Gaurav S. Sukhatme, Gautam Salhotra,
Ge Yan, Giulio Schiavi, Hao Su, Hao-Shu Fang, Haochen Shi, Heni Ben Amor, Henrik I Christensen, Hiroki Furuta,
Homer Walke, Hongjie Fang, Igor Mordatch, Ilija Radosavovic, Isabel Leal, Jacky Liang, Jaehyung Kim, Jan Schneider,
Jasmine Hsu, Jeannette Bohg, Jeffrey Bingham, Jiajun Wu, Jialin Wu, Jianlan Luo, Jiayuan Gu, Jie Tan, Jihoon Oh,
Jitendra Malik, Jonathan Tompson, Jonathan Yang, Joseph J. Lim, João Silvério, Junhyek Han, Kanishka Rao, Karl Pertsch,
Karol Hausman, Keegan Go, Keerthana Gopalakrishnan, Ken Goldberg, Kendra Byrne, Kenneth Oslund, Kento Kawaharazuka,
Kevin Zhang, Keyvan Majd, Krishan Rana, Krishnan Srinivasan, Lawrence Yunliang Chen, Lerrel Pinto, Liam Tan,
Lionel Ott, Lisa Lee, Masayoshi Tomizuka, Maximilian Du, Michael Ahn, Mingtong Zhang, Mingyu Ding, Mohan Kumar Srirama,
Mohit Sharma, Moo Jin Kim, Naoaki Kanazawa, Nicklas Hansen, Nicolas Heess, Nikhil J Joshi, Niko Suenderhauf,
Norman Di Palo, Nur Muhammad Mahi Shafiullah, Oier Mees, Oliver Kroemer, Pannag R Sanketi, Paul Wohlhart, Peng Xu,
Pierre Sermanet, Priya Sundaresan, Quan Vuong, Rafael Rafailov, Ran Tian, Ria Doshi, Roberto Martín-Martín,
Russell Mendonca, Rutav Shah, Ryan Hoque, Ryan Julian, Samuel Bustamante, Sean Kirmani, Sergey Levine, Sherry Moore,
Shikhar Bahl, Shivin Dass, Shuran Song, Sichun Xu, Siddhant Haldar, Simeon Adebola, Simon Guist, Soroush Nasiriany,
Stefan Schaal, Stefan Welker, Stephen Tian, Sudeep Dasari, Suneel Belkhale, Takayuki Osa, Tatsuya Harada, Tatsuya Matsushima,
Ted Xiao, Tianhe Yu, Tianli Ding, Todor Davchev, Tony Z. Zhao, Travis Armstrong, Trevor Darrell, Vidhi Jain, Vincent Vanhoucke,
Wei Zhan, Wenxuan Zhou, Wolfram Burgard, Xi Chen, Xiaolong Wang, Xinghao Zhu, Xuanlin Li, Yao Lu, Yevgen Chebotar, Yifan Zhou,
Yifeng Zhu, Ying Xu, Yixuan Wang, Yoonyoung Cho, Youngwoon Lee, Yuchen Cui, Yueh-hua Wu, Yujin Tang, Yuke Zhu, Yunzhu Li,
Yusuke Iwasawa, Yutaka Matsuo, Zhuo Xu, Zichen Jeff Cui
IEEE International Conference on Robotics and Automation (ICRA), 2024   (Best Paper Award)
PDF /
Website /
Blog Post /
Code /
Data
We introduce the Open X-Embodiment Dataset, the largest open-source real robot dataset to date. We train two models on the robotics data mixture: RT-1-X and RT-2-X.
|
|
Language Embedded Radiance Fields for Zero-Shot Task-Oriented Grasping
Adam Rashid*, Satvik Sharma*, Chung Min Kim, Justin Kerr, Lawrence Yunliang Chen, Angjoo Kanazawa, Ken Goldberg
Conference on Robot Learning (CoRL), 2023   (Oral Presentation, 6.6%) (Best Paper/Best Student Paper Finalist)
PDF /
Website /
Code /
Data
Given a natural language query, uses a LERF built on CLIP and DINO features to perform zero-shot semantic grasping of object parts.
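A sketch of the zero-shot query flow, assuming a language-embedded field that exposes per-point CLIP relevancy and DINO-based part grouping; the field and grasp-planner interfaces below are hypothetical:

```python
# Sketch of zero-shot grasping from a language query: score 3D points by CLIP
# relevancy to the query, group a part around the best point via DINO feature
# similarity, and pick the highest-ranked grasp on that part.
# `field` and `grasp_planner` are hypothetical stand-ins.

def grasp_from_query(query, field, grasp_planner):
    relevancy = field.clip_relevancy(query)             # per-point score for the query
    seed = relevancy.argmax()                           # most relevant 3D point
    part_points = field.group_by_dino_similarity(seed)  # grow the part around the seed
    grasps = grasp_planner.propose(part_points)
    return max(grasps, key=lambda g: relevancy[g.contact_point_index])
```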
|
|
Semantic Mechanical Search with Large Vision and Language Models
Satvik Sharma*, Huang Huang*, Kaushik Shivakumar, Lawrence Yunliang Chen, Ryan Hoque, Brian Ichter, Ken Goldberg
Conference on Robot Learning (CoRL), 2023
PDF /
Website
Uses VLMs and LLMs to create semantic distributions that can be integrated into downstream mechanical search policies.
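A sketch of one way to fold such a semantic distribution into a mechanical search policy by reweighting candidate actions; the scoring functions are hypothetical stand-ins:

```python
# Sketch of combining a semantic prior (from a VLM/LLM) with a geometric
# mechanical-search score, then acting greedily on the combined score.
# `semantic_prior` and `geometric_score` are hypothetical stand-ins.

def choose_next_action(candidate_actions, semantic_prior, geometric_score, alpha=0.5):
    def combined(a):
        # Convex combination of "where the target semantically belongs" and
        # "how informative this action is geometrically".
        return alpha * semantic_prior(a) + (1 - alpha) * geometric_score(a)
    return max(candidate_actions, key=combined)
```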
|
|
Bagging by Learning to Singulate Layers Using Interactive Perception
Lawrence Yunliang Chen, Baiyu Shi, Roy Lin, Daniel Seita, Ayah Ahmad, Richard Cheng, Thomas Kollar, David Held, Ken Goldberg
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023   (Best Industrial Robotics Research for Applications Finalist)
PDF /
Website
An algorithm for grasping a single layer of plastic bags and fabrics using purely visual feedback, together with a substantially improved bagging algorithm built on it.
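A rough sketch of an interactive single-layer grasping loop consistent with the description above; the layer classifier and motion primitives are hypothetical stand-ins, not the paper's exact procedure:

```python
# Sketch of interactive perception for layer singulation: pinch-grasp, check
# from an image how many layers were caught, and retry until exactly one layer
# is held. `robot`, `camera`, and `classify_layers` are hypothetical stand-ins.

def grasp_single_layer(robot, camera, classify_layers, max_attempts=10):
    for _ in range(max_attempts):
        robot.pinch_grasp()
        n_layers = classify_layers(camera.capture())   # visual feedback only
        if n_layers == 1:
            return True                                # a single layer is singulated
        robot.release()                                # zero or multiple layers: retry
    return False
```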
|
|
AutoBag: Learning to Open Plastic Bags and Insert Objects
Lawrence Yunliang Chen, Baiyu Shi, Daniel Seita, Richard Cheng, Thomas Kollar, Ken Goldberg
IEEE International Conference on Robotics and Automation (ICRA), 2023
PDF /
Website
A semantic representation of plastic bags and an algorithm for a bimanual robot to open a plastic bag from unstructured configurations, insert objects into it, and lift the bag.
|
|
Fleet-DAgger: Interactive Robot Fleet Learning with Scalable Human Supervision
Ryan Hoque, Lawrence Yunliang Chen, Satvik Sharma, Karthik Dharmarajan, Brijen Thananjeyan, Pieter Abbeel, Ken Goldberg
Conference on Robot Learning (CoRL), 2022   (Oral Presentation, 6.5%)
PDF /
Website /
Code
A formalism, several new algorithms, and a benchmark for interactive fleet learning: interactive learning with multiple robots and multiple humans.
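A minimal sketch of allocating scarce human supervision across a robot fleet with a priority function, in the spirit of interactive fleet learning; the choice of priority function (e.g., estimated risk or policy uncertainty) is illustrative:

```python
# Sketch of supervision allocation in interactive fleet learning: rank robots
# by a priority function and assign the available humans to the top-ranked ones.
# `priority` (e.g., estimated risk or policy uncertainty) is an illustrative choice.

def allocate_supervision(robot_states, priority, num_humans):
    ranked = sorted(range(len(robot_states)),
                    key=lambda i: priority(robot_states[i]),
                    reverse=True)
    supervised = set(ranked[:num_humans])   # humans teleoperate these robots
    autonomous = set(ranked[num_humans:])   # the rest run the learned policy
    return supervised, autonomous
```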
|
|
Efficiently Learning Single-Arm Fling Motions to Smooth Garments
Lawrence Yunliang Chen*, Huang Huang*, Ellen Novoseller, Daniel Seita, Jeffrey Ichnowski, Michael Laskey, Richard Cheng, Thomas Kollar, Ken Goldberg
The International Symposium on Robotics Research (ISRR), 2022
PDF /
Website
Given a new garment, quickly learns a fling action that effectively smooths it using only one robot arm.
|
|
Optimal Shelf Arrangement to Minimize Robot Retrieval Time
Lawrence Yunliang Chen, Huang Huang, Michael Danielczuk, Jeffrey Ichnowski, Ken Goldberg
IEEE International Conference on Automation Science and Engineering (CASE), 2022   (Best Student Paper Finalist)
PDF /
Website
Optimizes the arrangement of objects on a shelf to minimize the time a robot needs to search for and retrieve them.
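A toy sketch of the underlying optimization, searching over arrangements to minimize expected retrieval cost; the cost model and brute-force search are illustrative, not the paper's formulation:

```python
# Toy sketch: choose the object-to-slot assignment that minimizes expected
# retrieval cost, where deeper or more occluded slots cost more to access.
# The cost model and exhaustive search below are illustrative only.
from itertools import permutations

def best_arrangement(objects, slot_costs, access_prob):
    """objects: list of names (one per slot); slot_costs[i]: cost of retrieving
    from slot i; access_prob[name]: how often each object is requested."""
    best, best_cost = None, float("inf")
    for perm in permutations(objects):          # object perm[i] is placed in slot i
        cost = sum(access_prob[obj] * slot_costs[i] for i, obj in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best, best_cost
```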
|
|
Real2Sim2Real: Self-Supervised Learning of Physical Single-Step Dynamic Actions for Planar Robot Casting
Vincent Lim*, Huang Huang*, Lawrence Yunliang Chen, Jonathan Wang, Jeffrey Ichnowski, Daniel Seita, Michael Laskey, Ken Goldberg
IEEE International Conference on Robotics and Automation (ICRA), 2022
PDF /
Website /
Video /
Press
Uses Real2Sim2Real to learn to manipulate a free-end cable so that its endpoint accurately reaches desired target positions on a planar surface.
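A high-level sketch of the Real2Sim2Real loop: fit simulator parameters to real trajectories, then learn the action policy from data generated in the tuned simulator; the component interfaces are hypothetical:

```python
# Sketch of Real2Sim2Real: (1) tune simulator parameters so simulated cable
# trajectories match real ones, (2) generate large-scale data in the tuned
# simulator, (3) fit an action policy on that data. All handles are hypothetical.

def real2sim2real(real_trajectories, simulator, candidate_params, fit_policy,
                  n_sim=10000):
    # 1. System identification: pick parameters minimizing sim-to-real trajectory error.
    def sim_error(params):
        return sum(simulator.rollout_error(params, traj) for traj in real_trajectories)
    best_params = min(candidate_params, key=sim_error)

    # 2. Self-supervised data collection in the tuned simulator.
    sim_data = [simulator.sample_action_outcome(best_params) for _ in range(n_sim)]

    # 3. Learn the single-step dynamic action policy from simulated data.
    return fit_policy(sim_data)
```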
|
|
A Multi-Chamber Smart Suction Cup for Adaptive Gripping and Haptic Exploration
Tae Myung Huh, Kate Sanders, Michael Danielczuk, Monica Li, Yunliang Chen, Ken Goldberg, Hannah S. Stuart
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021
PDF /
Video
A new suction cup design that contains multiple chambers for gripping and haptic exploration.
|
|
Understanding and Mitigating Annotation Bias in Facial Expression Recognition
Yunliang Chen, Jungseock Joo
IEEE/CVF International Conference on Computer Vision (ICCV), 2021
PDF (with appendix) /
Code
An analysis of annotation bias across large public facial expression recognition datasets.
|
Grassi Fellowship, 2023-24
Berkeley IEOR Department's endowed PhD fellowship, one awardee per year
|
Katta G. Murty Prize for Best Paper in Optimization, 2023
For the paper "Optimal Shelf Arrangement and Rearrangement to Minimize Robot Retrieval Time," IEOR Department, UC Berkeley
|
NSF Graduate Research Fellowship, 2022-25
NSF Graduate Research Fellowship Program (GRFP)
|
Scholarship for the International Elite Summer School in Robotics and Entrepreneurship, 2022
Funded by the Danish Ministry of Higher Education and Science, Odense Municipality, the Innovation Centre Denmark, the Novo Nordisk Foundation, and private partners
|
Chiang Fellowship for Graduate Scholars in Manufacturing and Engineering, 2020-21
UC Berkeley IEOR Departmental Fellowship
|
|