Resume, But Online

Oliver Bryniarski
UC Berkeley '21

Abstract

My name is Oliver Bryniarski, and I studied Computer Science and Mathematics at UC Berkeley. I'm passionate about machine learning, math, and research. I worked on machine learning research for several years (unsupervised learning, adversarial attacks) and am currently at Ambi Robotics, improving a computer vision-based robotic picking system. Check out my LinkedIn and GitHub.

Figure 1: Me
Figure 2: What I Do

Contents

  1. Experience
    1. Ambi Robotics
    2. Amazon
    3. Berkeley Artificial Intelligence Research
    4. Machine Learning @ Berkeley
  2. Publications
    1. Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent

Experience

Computer Vision Engineer

Ambi Robotics, January 2022 - Present

Researching new solutions to computer vision problems (learning-based and classical) and implementing them efficiently. Significantly improved the pick success rate of our robotic sorting system and wrote a substantial portion of the code for our machine learning infrastructure. Implemented, optimized, and A/B tested models for stereo depth estimation, OCR, segmentation, and classification using PyTorch, NumPy, and Hugging Face, with TensorRT for inference. See Figure 2 for a video of our system in action.
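
To illustrate the inference path, here is a minimal sketch of exporting a PyTorch model to ONNX and compiling it with the TensorRT 8 Python API. The model, input shape, and file names are placeholders for illustration, not Ambi's actual pipeline.

    import torch
    import torchvision
    import tensorrt as trt

    # Placeholder model and input shape; the real models and shapes differ.
    model = torchvision.models.resnet18(weights=None).eval()
    dummy = torch.randn(1, 3, 224, 224)
    torch.onnx.export(model, dummy, "model.onnx",
                      input_names=["input"], output_names=["output"])

    # Parse the ONNX graph and build a serialized TensorRT engine.
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open("model.onnx", "rb") as f:
        assert parser.parse(f.read()), parser.get_error(0)

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)  # half precision speeds up inference
    with open("model.engine", "wb") as f:
        f.write(builder.build_serialized_network(network, config))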

Software Development Engineer Intern

Amazon, May 2021 - August 2021

Implemented variable aliasing in Amazon's buyer fraud detection data loading pipeline, fixing a major pain point for applied scientists.

Undergraduate Researcher

Berkeley Artificial Intelligence Research, June 2020 - December 2021

Worked under David Chan and John Canny on cluster-based contrastive learning without a fixed number of clusters. Implemented numerous clustering algorithms and contrastive learning techniques in PyTorch.
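
To give a flavor of the idea, here is a minimal sketch of one way to do cluster-based contrastive learning without fixing the number of clusters up front. The choice of DBSCAN, the SupCon-style loss, and all hyperparameters are my assumptions, not the project's actual method.

    import torch
    import torch.nn.functional as F
    from sklearn.cluster import DBSCAN

    def cluster_contrastive_loss(embeddings, temperature=0.1):
        """SupCon-style loss whose positives come from a nonparametric
        clustering, so the number of clusters is discovered per batch.
        Hypothetical sketch, not the project's actual code."""
        z = F.normalize(embeddings, dim=1)
        # DBSCAN infers the number of clusters from the data; -1 marks noise.
        labels = torch.as_tensor(
            DBSCAN(eps=0.5, min_samples=2).fit_predict(z.detach().cpu().numpy())
        ).to(z.device)
        eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
        sim = (z @ z.T / temperature).masked_fill(eye, float("-inf"))
        # Same-cluster pairs are positives; noise points have no positives.
        pos = (labels[:, None] == labels[None, :]) & (labels[:, None] >= 0) & ~eye
        log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
        per_sample = -log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
        return per_sample[pos.any(dim=1)].mean()  # only samples with a positive

In practice the positives might come from a momentum encoder or memory bank rather than a single batch; this per-batch version just shows the mechanics.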

Undergraduate Researcher

Machine Learning @ Berkeley, October 2020 - December 2021

Worked with Nicholas Carlini to break adversarial example defenses. Implemented multiple state-of-the-art adversarial example detection defenses in PyTorch and broke them with our new attack technique, Orthogonal Projected Gradient Descent. Published the results at ICLR 2022.

Publications

Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent

Oliver Bryniarski, Nabeel Hingun, Pedro Pachuca, Vincent Wang, Nicholas Carlini

arXiv / code

Evading adversarial example detection defenses requires finding adversarial examples that simultaneously (a) are misclassified by the model and (b) are judged non-adversarial by the detector. These two objectives can conflict: a gradient step that helps one can undo the other. We introduce Orthogonal Projected Gradient Descent, an improved attack technique that avoids this problem by orthogonalizing the gradients of the two objectives when running standard gradient-based attacks. We break four published defenses and show that our technique is effective against a variety of detection schemes.
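
A minimal sketch of the core idea, assuming an untargeted L-infinity attack, a differentiable detector score, and inputs in [0, 1]. The published attack alternates between the classifier and detector objectives and handles more cases; this sketch shows only the projection step.

    import torch
    import torch.nn.functional as F

    def orthogonal_pgd(model, detector, x, y, steps=200, step_size=1/255, eps=8/255):
        """Simplified Orthogonal PGD sketch. Each step follows the
        classification-loss gradient after removing its component along
        the detector-score gradient, so progress toward misclassification
        does not also raise the detection score."""
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            cls_loss = F.cross_entropy(model(x_adv), y)   # want to increase
            det_score = detector(x_adv).sum()             # want to keep low
            g, = torch.autograd.grad(cls_loss, x_adv, retain_graph=True)
            d, = torch.autograd.grad(det_score, x_adv)
            # Project g onto the orthogonal complement of d (per example).
            gf, df = g.flatten(1), d.flatten(1)
            coef = (gf * df).sum(1, keepdim=True) / df.pow(2).sum(1, keepdim=True).clamp(min=1e-12)
            g_orth = (gf - coef * df).view_as(x_adv)
            with torch.no_grad():
                x_adv = x_adv + step_size * g_orth.sign()  # ascend cls_loss
                x_adv = x + (x_adv - x).clamp(-eps, eps)   # stay in the eps-ball
                x_adv = x_adv.clamp(0, 1)
        return x_adv.detach()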