I'm a roboticist on the Tesla Optimus team, where I get to teach robots how to navigate and interact with the real world—basically, making sci-fi a reality!
I earned my Ph.D. in Machine Learning at the University of Maryland under the guidance of Prof. Tom Goldstein. My research focused on making AI systems more robust and reliable in challenging scenarios. Along the way, I interned at Adobe, Waymo, and Meta. Before diving into AI, I was a business major at Temple University and even worked as an actuary—until I realized crunching numbers wasn't nearly as exciting as building intelligent machines!
[Jan 2024] Our paper on improving instruction finetuning is accepted to ICLR 2024.
[Jan 2023] Our paper on the implicit bias of neural networks is accepted to ICLR 2023.
[May 2022] Our paper on certified neural network watermarks is accepted to ICML 2022.
[Mar 2022] Our paper studying the reproducibility of neural networks is accepted to CVPR 2022.
[Aug 2021] Our paper on data poisoning through adversarial examples is accepted to NeurIPS 2021.
[Jan 2021] Our paper on hardware-aware quantization is accepted to ICLR 2021.
[Aug 2020] Our paper on certified object detection through randomized smoothing is accepted to NeurIPS 2020.
[Aug 2020] Our paper on certifying strategyproof auction networks is accepted to NeurIPS 2020.
[Jan 2020] Our paper on certified defenses against patch attacks is accepted to ICLR 2020.
Neel Jain, Ping-yeh Chiang, Yuxin Wen, John Kirchenbauer, Hong-Min Chu, Gowthami Somepalli, Brian R. Bartoldson, Bhavya Kailkhura, Avi Schwarzschild, Aniruddha Saha, Micah Goldblum, Jonas Geiping, Tom Goldstein, "NEFTune: Noisy Embeddings Improve Instruction Finetuning", in the Proceedings of the International Conference on Learning Representations (ICLR), 2024. [paper]
Ping-yeh Chiang, Yipin Zhou, Omid Poursaeed, Satya Narayan Shukla, Tom Goldstein, Ser-Nam Lim, "Universal Pyramid Adversarial Training for Improved ViT Performance", Preprint.
Ping-yeh Chiang, Renkun Ni, David Yu Miller, Arpit Bansal, Jonas Geiping, Micah Goldblum, Tom Goldstein, "Loss Landscapes are All You Need: Neural Network Generalization Can Be Explained Without the Implicit Bias of Gradient Descent", in the Proceedings of the International Conference on Learning Representations (ICLR), 2023. Spotlight [paper][code]
Gowthami Somepalli, Liam Fowl, Arpit Bansal, Ping Yeh-Chiang, Yehuda Dar, Richard Baraniuk, Micah Goldblum, Tom Goldstein, "Can neural nets learn the same model twice? Investigating reproducibility and double descent from the decision boundary perspective", in the Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. Oral [paper][code]
Arpit Amit Bansal*, Ping-yeh Chiang*, Michael Curry, Hossein Souri, Rama Chellappa, John P Dickerson, Rajiv Jain, Tom Goldstein, "Certified Watermarks for Neural Networks", in the Proceedings of the International Conference on Machine Learning (ICML), 2022. Spotlight [paper]
Liam Fowl*, Micah Goldblum*, Ping-yeh Chiang*, Jonas Geiping, Wojtek Czaja, Tom Goldstein, "Adversarial Examples Make Strong Poisons", in the Proceedings of the Conference on Neural Information Processing Systems (NeurIPS), 2021. [paper]
Ping-yeh Chiang, Michael J. Curry, Ahmed Abdelkader, Aounon Kumar, John Dickerson, Tom Goldstein, "Detection as Regression: Certified Object Detection by Median Smoothing", in the Proceedings of the Conference on Neural Information Processing Systems (NeurIPS), 2020. [paper][code]
Michael J Curry*, Ping-Yeh Chiang*, Tom Goldstein, John Dickerson, "Certifying Strategyproof Auction Networks", in the Proceedings of the Conference on Neural Information Processing Systems (NeurIPS), 2020. [paper]
Ping-Yeh Chiang, Jonas Geiping, Micah Goldblum, Tom Goldstein, Renkun Ni, Steven Reich, Ali Shafahi, "WITCHcraft: Efficient PGD attacks with random step size", in the Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020. [paper]
Ping-Yeh Chiang*, Renkun Ni*, Ahmed Abdelkader, Chen Zhu, Christoph Studer, Tom Goldstein, "Certified Defenses for Adversarial Patches", in the Proceedings of the International Conference on Learning Representations (ICLR), 2020. [paper][code]
I'm currently diving deeper into various topics in AI and robotics. This section will collect my thoughts, notes, and resources as I explore new areas.
Coming soon: A collection of interesting findings, observations, and insights from my research and learning journey.
pingyeh.chiang-{AT}-gmail-DOT-com