I'm the Head of Applied Research @ VSCO. (We're hiring, please email!) Before joining VSCO, I was CTO of the TRASH app. Before that I was a Postdoctoral Researcher at Microsoft Research New England. My work is about creating dialogue between AI and people. My interests include video understanding, visual attribute discovery, human-in-the-loop systems, fine-grained object recognition, AI for climate science, and active learning. I received my PhD from Brown University in 2016 under the direction of James Hays.


Expert identification of visual primitives used by CNNs during mammogram classification

at Microsoft Research New England | 2018 | Wu, Peck, Hsieh, Dialani, Lehman, Zhou, Syrgkanis, Mackey, Patterson

February 10 - 15, 2018: SPIE Medical Imaging

COCO Attributes

at Hays Lab | 2016 | Patterson, Hays

COCO Attributes Dataset v1 Released
3.5 million (object, attribute) annotation pairs for 180,000 objects. Our ECCV 2016 paper. [ project webpage ]
Github Repository.

Tropel: Crowdsourcing Detectors with Minimal Training

at Hays Lab | 2015 | Patterson, Van Horn, Belongie, Perona, Hays

Creating detectors for arbitrary visual events using only one example and a crowd workforce. HCOMP 2015 Best Paper Finalist.

Scene Attributes Journal Version

at Hays Lab | 2014 | Patterson, Xu, Su, Hays

In this expansion of the original SUN Attributes paper, we demonstrate the usefulness of attributes for scene classification, scene parsing, image captioning, and image similarity search. May 2014 IJCV Paper.

Crowd-in-the-Loop Active Learning

at Hays Lab | 2013 | Patterson, Van Horn, Belongie, Perona, Hays

In this paper we present a method for bootstrapping classifiers for fine-grained visual classes using minimal labeled data (5-10 positive examples per class). NIPS 2013 Crowd Workshop paper and the poster.

Using humans to build mid-level features

at Hays Lab | 2013 | Patterson, Lin, Hays

We introduce a crowd-in-the-loop method for unsupervised discovery of mid-level patches. We demonstrate this feature discovery process on the 15 scene dataset and compare to methods that do not use crowdsourcing. CVPR 2013 Workshop Paper and the poster.

Basic Level Scene Understanding

at Hays Lab | 2013 | Xiao, Hays, Russell, Patterson, Ehinger, Torralba, Oliva

This paper provides an overview of many methods of scene understanding explored using the SUN dataset. Frontiers in Psychology Paper.

Scene Attributes

at Hays Lab | 2012 | Patterson, Hays

Attribute Classifiers v2 Released. Attributes are now classified in ~12 sec per image. Please check the project webpage.
Github Repository.

We present the first large-scale scene attribute database. Our attribute database spans more than 700 categories and 14,000 images. Our CVPR 2012 paper. [ project webpage ]

Gathering Attributes on Amazon Mechanical Turk

at Hays Lab | 2011 | Patterson, Hays

For the SUN Attribute dataset project, I worked to build a reliable Turker workforce to label the dataset. A summary of this experience is available in the CVPR 2011 Fine-Grained Computer Vision Workshop paper and poster.

Master's Research

During my Master's course I was interested in novel topologies for permanent magnet motors. My general interest was the design and control of electrical machines and electromagnetic phenomena for transportation (trains, ships, submarines). Please check the award-winning ICEMS 2009 poster and the paper.


To download this dataset, please visit the project webpage. The image on the left is a 2D t-SNE projection of all the objects in the dataset, represented by their attribute vector.
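For readers who want to reproduce a projection like this one, here is a minimal sketch. It assumes attribute vectors are rows of a NumPy array and uses scikit-learn's TSNE as a stand-in for whatever tool produced the original figure; the data below is randomly generated, not the actual COCO Attributes annotations.

```python
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical stand-in data: 200 objects, each described by a
# 196-dimensional attribute vector (COCO Attributes uses 196 attributes).
rng = np.random.default_rng(0)
attribute_vectors = (rng.random((200, 196)) < 0.1).astype(float)

# Project the attribute vectors to 2D with t-SNE.
# Note: perplexity must be smaller than the number of samples.
tsne = TSNE(n_components=2, perplexity=30, random_state=0)
embedding = tsne.fit_transform(attribute_vectors)

print(embedding.shape)  # one 2D point per object
```

Each row of `embedding` is then a 2D point that can be scattered (and optionally overlaid with object thumbnails) to produce a map like the one shown.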

To download this dataset, please visit the project webpage. The image on the left is a montage of four images from the dataset that include the attribute 'camping'.


Deep Learning for Computer Vision: Tufts University Spring 2017

Course Website

Introduction to Deep Learning for Computer Vision. Syllabus.

CSCI 2951T: Data-driven Computer Vision, Brown University

Course Website

Seminar course covering topics in state-of-the-art computer vision. Syllabus.


  • Fall 2017 : COCO + Places 2017: Workshop for the COCO and Places challenges at ICCV 2017.
  • Fall 2017 : GroupSight 2017: Second Workshop on Human Computation for Image and Video Analysis @ HCOMP 2017.
  • Fall 2016 : ILSVRC + COCO 2016: Workshop for the COCO and ImageNet challenges at ECCV 2016.
  • Fall 2016 : GroupSight 2016: Workshop on Human Computation for Image and Video Analysis @ HCOMP 2016.
  • Fall 2015 : ILSVRC + COCO 2015: Workshop for the COCO and ImageNet challenges at ICCV 2015.