
Course Schedule

Paper reading list and presenters

Jan. 31, Tue
Course Overview (Slides)
Chen Sun
  1. (Background) How to Read a CS Research Paper by Philip Fong
  2. (Background) How to do research by Bill Freeman
  3. (Background) How to write a good paper by Bill Freeman
  4. (Background) How to speak (video) by Patrick Winston
Feb. 2, Thu
Deep Learning Recap (Slides)
Chen Sun
  1. (Background) Novelty in Science by Michael Black
  2. (Background) Everything is Connected: Graph Neural Networks
Feb. 6, Mon
Due Presentation signup sheet
Feb. 7, Tue
Learning with Various Supervision (Slides)
Chen Sun
  1. (Background) How to grow a mind: Statistics, structure, and abstraction
  2. (Background) ICLR Debate with Leslie Kaelbling (video)
  3. (Background) Learning with not Enough Data by Lilian Weng (Part 1 / Part 2)
Feb. 9, Thu
The Bitter Lesson (Reading survey / Slides)
Amina, Ilija, and Raymond
  1. Revisiting Unreasonable Effectiveness of Data in the Deep Learning Era
  2. Unbiased Look at Dataset Bias
  3. (Background) The bitter lesson
  4. (Background) The Unreasonable Effectiveness of Data
  5. (Background) Exploring Randomly Wired Neural Networks for Image Recognition
  6. (Background) NAS evaluation is frustratingly hard
Feb. 14, Tue
Semi-supervised Learning (Reading survey / Slides)
Rosella, Patrick, Lingyu, and Michael
  1. Mean teachers are better role models
  2. MixMatch: A Holistic Approach to Semi-Supervised Learning
  3. (Presentation) Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning
  4. (Presentation) FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
  5. (Background) Semi-Supervised Classification with Graph Convolutional Networks
  6. (Background) Inductive Representation Learning on Large Graphs
  7. (Background) Transfer Learning in a Transductive Setting
Feb. 16, Thu
Transfer Learning (Reading survey / Slides)
Wasiwasi, Abubakarr, Yiqing, and Jacob
  1. Learning and Transferring Mid-Level Image Representations using Convolutional Neural Networks
  2. Transfusion: Understanding Transfer Learning for Medical Imaging
  3. (Background) Big Transfer (BiT): General Visual Representation Learning
  4. (Background) Rethinking Pre-training and Self-training
  5. (Background) A Large-scale Study of Representation Learning with the Visual Task Adaptation Benchmark
  6. (Background) Rethinking ImageNet Pre-training
  7. (Background) Natural Language Processing (almost) from Scratch
Feb. 21, Tue
University holiday, no class
Feb. 23, Thu
Few-shot and In-context Learning (Reading survey / Slides)
Sheridan, Shreyas, and Zhuo
  1. Matching Networks for One Shot Learning
  2. Language Models are Few-Shot Learners
  3. (Background) Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
  4. (Background) Prototypical Networks for Few-shot Learning
  5. (Background) Learning to Learn (Blog)
  6. (Background) Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need?
  7. (Background) Zero-shot Recognition via Semantic Embeddings and Knowledge Graphs
  8. (Background) Flamingo: a Visual Language Model for Few-Shot Learning
Feb. 28, Tue
Multitask Learning (Reading survey / Slides)
Noah, Alexander, Pinyuan
  1. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer (Sections 1, 2, and 4)
  2. A Generalist Agent
  3. (Background) Intelligence without representation
  4. (Background) Multitask Prompted Training Enables Zero-Shot Task Generalization
  5. (Background) Taskonomy: Disentangling Task Transfer Learning
  6. (Background) UberNet: Training a Universal Convolutional Neural Network
  7. (Background) Unified-IO: A Unified Model for Vision, Language, and Multi-Modal Tasks
Mar. 2, Thu
Transformer and its variants (Reading survey / Slides)
Daniel, Yuan, and David
  1. Swin Transformer
  2. Perceiver: General Perception with Iterative Attention
  3. (Background) Synthesizer: Rethinking Self-Attention in Transformer Models
  4. (Background) Long Range Arena: A Benchmark for Efficient Transformers
  5. (Background) MLP-Mixer: An all-MLP Architecture for Vision
  6. (Background) Linformer: Self-Attention with Linear Complexity
  7. (Background) On the Relationship between Self-Attention and Convolutional Layers
Mar. 7, Tue
Self-supervised and Multimodal Learning (Slides)
Chen Sun
  1. (Background) Self-Supervised Representation Learning by Lilian Weng
  2. (Background) Contrastive Representation Learning by Lilian Weng
  3. (Background) Self-Supervised Learning by Andrew Zisserman
Mar. 9, Thu
Self-supervised Learning for NLP (Reading survey / Slides)
Yang, Adrian, and Vignesh
  1. REALM: Retrieval-Augmented Language Model Pre-Training
  2. Discovering Latent Knowledge in Language Models Without Supervision
  3. (Background) Self Supervision Does Not Help Natural Language Supervision at Scale
  4. (Background) How does in-context learning work?
  5. (Background) SpanBERT: Improving Pre-training by Representing and Predicting Spans
  6. (Background) RoBERTa: A Robustly Optimized BERT Pretraining Approach
  7. (Background) Human Language Understanding & Reasoning
  8. (Background) Do Large Language Models Understand Us?
Mar. 9, Thu
Project Final project signup
Due on 3/16
Mar. 10, Fri
Homework First mini project
Due on 4/28
Mar. 14, Tue
Invited Computer Vision for Global-Scale Biodiversity Monitoring
Sara Beery
Mar. 16, Thu
Self-supervised Learning for Images and Videos (Reading survey / Slides)
Arthur, Robert, and Siyang
  1. Dimensionality Reduction by Learning an Invariant Mapping
  2. Time-Contrastive Networks: Self-Supervised Learning from Video
  3. (Background) BEiT: BERT Pre-Training of Image Transformers
  4. (Background) Representation Learning with Contrastive Predictive Coding
  5. (Background) Masked Autoencoders Are Scalable Vision Learners
  6. (Background) Deep Clustering for Unsupervised Learning of Visual Features
  7. (Background) Bootstrap your own latent: A new approach to self-supervised Learning
  8. (Background) Learning image representations tied to ego-motion
Mar. 21, Tue
Invited The Future of Computer Vision via Foundation Models and Beyond
Ce Liu
Mar. 23, Thu
Project proposal (Slides)
Mar. 23, Thu
Feedback Mid-semester Feedback Form
Mar. 28, Tue
Homework Second mini project
Due on 4/28
Mar. 28, Tue
Spring break
Mar. 30, Thu
Spring break
Apr. 4, Tue
Reinforcement Learning (Slides)
Chen Sun
Apr. 6, Thu
World Models (Reading survey / Slides)
Ray, Alexander, and Paul
  1. World Models
  2. Learning Latent Dynamics for Planning from Pixels
  3. (Background) Mastering Diverse Domains through World Models
  4. (Background) Control-Aware Representations for Model-based Reinforcement Learning
  5. (Background) Shaping Belief States with Generative Environment Models for RL
  6. (Background) Model-Based Reinforcement Learning: Theory and Practice
  7. (Background) DayDreamer: World Models for Physical Robot Learning
Apr. 11, Tue
Generative Models (Slides)
Calvin Luo
  1. (Background) Understanding Diffusion Models: A Unified Perspective
Apr. 13, Thu
RL from Human Feedback (Reading survey / Slides)
Ziyi, Qi, and Christopher
  1. Deep Reinforcement Learning from Human Preferences
  2. Training language models to follow instructions with human feedback
  3. (Background) Why does ChatGPT constantly lie? by Noah Smith
  4. (Background) ChatGPT Is a Blurry JPEG of the Web by Ted Chiang
  5. (Background) Stanford CS224N
  6. (Background) Learning to summarize from human feedback
  7. (Background) Reinforcement Learning for Language Models
Apr. 18, Tue
Learning from Offline Demonstration (Reading survey / Slides)
Anirudha, Zilai, and Akash
  1. Learning to Act by Watching Unlabeled Online Videos
  2. Offline Reinforcement Learning as One Big Sequence Modeling Problem
  3. (Background) Language Conditioned Imitation Learning over Unstructured Data
  4. (Background) Building Open-Ended Embodied Agents with Internet-Scale Knowledge
  5. (Background) Decision Transformer: Reinforcement Learning via Sequence Modeling
  6. (Background) Understanding the World Through Action
  7. (Background) Learning Latent Plans from Play
Apr. 20, Thu
3D Generation (Reading survey / Slides)
Nitya, Linghai, and Yuan
  1. DreamFusion: Text-to-3D using 2D Diffusion
  2. RenderDiffusion: Image Diffusion for 3D Reconstruction, Inpainting and Generation
  3. (Background) Equivariant Diffusion for Molecule Generation in 3D
  4. (Background) Point-E: A System for Generating 3D Point Clouds from Complex Prompts
  5. (Background) Text-To-4D Dynamic Scene Generation
  6. (Background) Zero-Shot Text-Guided Object Generation with Dream Fields
Apr. 25, Tue
Compositionality (Reading survey / Slides)
Lingze, Suraj, and Apoorv
  1. Learning to Compose Neural Networks for Question Answering
  2. Compositional Visual Generation with Composable Diffusion Models
  3. (Background) Measuring and Narrowing the Compositionality Gap in Language Models
  4. (Background) CREPE: Can Vision-Language Foundation Models Reason Compositionally?
  5. (Background) COGS: A Compositional Generalization Challenge Based on Semantic Interpretation
  6. (Background) Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems
Apr. 27, Thu
Model Interpretability (Reading survey / Slides)
Michael, Haoyu, and Qinan
  1. Do Vision Transformers See Like Convolutional Neural Networks?
  2. Acquisition of Chess Knowledge in AlphaZero
  3. (Background) BERT rediscovers the classical NLP pipeline
  4. (Background) Concept Bottleneck Models
  5. (Background) Tracr: Compiled Transformers as a Laboratory for Interpretability
  6. (Background) Interpreting Neural Networks through the Polytope Lens
  7. (Background) Neural Networks and the Chomsky Hierarchy
Apr. 28, Fri
Due Last day to submit mini projects
May 2, Tue
Final project office hours
May 4, Thu
Final project office hours
May 11, Thu
Final project presentations (CIT 368, noon to 2:30pm) (Slides)
May 12, Fri
Due Project submission (Form)