Guidelines & Policies for Course Projects

Students must work on projects individually. All reports (i.e., the paper reading report, proposal, peer-review report, and final project report) must be written in the NeurIPS conference format and submitted as PDFs.
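
For reference, a minimal LaTeX skeleton for a NeurIPS-format report might look like the sketch below. It assumes the publicly released neurips_2023 style file; check the course page for the exact style file and options you are expected to use.

    \documentclass{article}
    % Assumes neurips_2023.sty sits next to this .tex file; substitute
    % whichever year's NeurIPS style file the course specifies.
    \usepackage[final]{neurips_2023}  % [final] gives the camera-ready layout
    \usepackage{graphicx}             % figures
    \usepackage{booktabs}             % clean tables
    \usepackage{amsthm}               % proof environment (used in appendices)

    \title{Your Project Title}
    \author{Your Name \\ Your Department \\ \texttt{you@example.edu}}

    \begin{document}
    \maketitle

    \begin{abstract}
      One paragraph summarizing the main idea of the project and its
      contributions.
    \end{abstract}

    \section{Introduction}
    % ...

    \end{document}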

The grade will depend on the quality of the ideas, how well you present them in the report, how illuminating and/or convincing your experiments are, and how well-supported your conclusions are. The project should be a manageable amount of work, e.g., reproducing an existing model or surveying a research topic.

Length (5%)

The report should be 4 to 8 pages, not including appendices or the bibliography. Don’t be afraid to keep the text short and to the point, and to include large illustrative figures.

Code & Appendix (15%)

You should submit a compressed file (e.g., a zip archive) containing the PDF and your code, unless you are doing a purely theoretical project. In that case, make sure you submit an appendix that includes all the proofs. You can include as many proofs, extra details, experiments, etc., as you want in the appendices.
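
For theory-heavy projects, one common pattern (a sketch, not a required structure) is to place the proofs in an appendix after the references; the theorem label below is a placeholder:

    % Appendices go after the references and do not count toward the page limit.
    % The proof environment requires \usepackage{amsthm} in the preamble.
    \appendix

    \section{Proofs of Main Results}

    \begin{proof}[Proof of Theorem 1]  % "Theorem 1" is a placeholder label
      % Restate the claim if it helps the reader, then give the full argument.
    \end{proof}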

Abstract (10%)

It should summarize the main idea of the project and its contributions.

Introduction (15%)

It should clearly state the problem being addressed and/or the method being reproduced, and why it is important.

Model/Method (30%)

It should describe your model or method clearly. The goal is to make your paper accessible, especially to readers who start by skimming it.

Experiments (20%)

It should present one or more experiments that support your claims.
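
When reporting quantitative results, a compact table is usually clearer than prose. Below is a minimal booktabs sketch in the NeurIPS style; the rows are placeholders, not data:

    % Requires \usepackage{booktabs} in the preamble.
    \begin{table}[t]
      \centering
      \caption{Layout sketch for a reproduction study: reported vs.\ reproduced numbers.}
      \label{tab:results}
      \begin{tabular}{lcc}
        \toprule
        Method         & Reported & Reproduced \\
        \midrule
        Baseline       & --       & --         \\
        Paper's model  & --       & --         \\
        \bottomrule
      \end{tabular}
    \end{table}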

Conclusion & Future Work (5%)

It should summarize the main takeaways of your project. It should also include a discussion of the limitations and potential future directions for improvement.

Recommended Paper List

  1. Deep Sets
  2. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
  3. Attention Is All You Need
  4. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
  5. Learning Transferable Visual Models From Natural Language Supervision
  6. Sequence to Sequence Learning with Neural Networks
  7. MLP-Mixer: An all-MLP Architecture for Vision
  8. Semi-Supervised Classification with Graph Convolutional Networks
  9. Gated Graph Sequence Neural Networks
  10. How Powerful are Graph Neural Networks?
  11. Spectral Networks and Locally Connected Networks on Graphs
  12. NerveNet: Learning Structured Policy with Graph Neural Networks
  13. The Graph Neural Network Model (the original Graph Neural Networks paper)
  14. Neural Message Passing for Quantum Chemistry
  15. Graph Attention Networks
  16. LanczosNet: Multi-Scale Deep Graph Convolutional Networks
  17. Graph Signal Processing: Overview, Challenges, and Applications
  18. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering
  19. 3D Graph Neural Networks for RGBD Semantic Segmentation
  20. Few-Shot Learning with Graph Neural Networks
  21. Convolutional Networks on Graphs for Learning Molecular Fingerprints
  22. node2vec: Scalable Feature Learning for Networks
  23. Inductive Representation Learning on Large Graphs
  24. Learning Lane Graph Representations for Motion Forecasting
  25. Representation Learning on Graphs: Methods and Applications
  26. Modeling Relational Data with Graph Convolutional Networks
  27. Hierarchical Graph Representation Learning with Differentiable Pooling
  28. Inference in Probabilistic Graphical Models by Graph Neural Networks
  29. Do Transformers Really Perform Badly for Graph Representation?
  30. Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks
  31. SpAGNN: Spatially-Aware Graph Neural Networks for Relational Behavior Forecasting from Sensor Data
  32. Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting
  33. Geometric Deep Learning: Going Beyond Euclidean Data
  34. Geometric Deep Learning on Graphs and Manifolds Using Mixture Model CNNs
  35. Dynamic Graph CNN for Learning on Point Clouds
  36. Weisfeiler and Lehman Go Cellular: CW Networks
  37. Provably Powerful Graph Networks
  38. Invariant and Equivariant Graph Networks
  39. On Learning Sets of Symmetric Elements
  40. Relational Inductive Biases, Deep Learning, and Graph Networks
  41. Graph Matching Networks for Learning the Similarity of Graph Structured Objects
  42. Deep Parametric Continuous Convolutional Neural Networks
  43. Neural Execution of Graph Algorithms
  44. Neural Execution Engines: Learning to Execute Subroutines
  45. Learning to Represent Programs with Graphs
  46. Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks
  47. Pointer Graph Networks
  48. Learning to Solve NP-Complete Problems - A Graph Neural Network for Decision TSP
  49. Premise Selection for Theorem Proving by Deep Graph Embedding
  50. Graph Representations for Higher-Order Logic and Theorem Proving
  51. What Can Neural Networks Reason About?
  52. Discriminative Embeddings of Latent Variable Models for Structured Data
  53. Learning Combinatorial Optimization Algorithms over Graphs
  54. On Layer Normalization in the Transformer Architecture
  55. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
  56. Recipe for a General, Powerful, Scalable Graph Transformer
  57. Variational Graph Auto-Encoders
  58. Deep Graph Infomax
  59. GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Models
  60. Efficient Graph Generation with Graph Recurrent Attention Networks
  61. MolGAN: An Implicit Generative Model for Small Molecular Graphs
  62. GraphVAE: Towards Generation of Small Graphs Using Variational Autoencoders
  63. Learning Deep Generative Models of Graphs
  64. Permutation Invariant Graph Generation via Score-Based Generative Modeling
  65. Graph Normalizing Flows
  66. Constrained Graph Variational Autoencoders for Molecule Design
  67. Generative Code Modeling with Graphs
  68. Structured Denoising Diffusion Models in Discrete State-Spaces
  69. Structured Generative Models of Natural Source Code
  70. A Model to Search for Synthesizable Molecules
  71. Grammar Variational Autoencoder
  72. Scalable Deep Generative Modeling for Sparse Graphs
  73. Energy-Based Processes for Exchangeable Data
  74. Learning Discrete Energy-based Models via Auxiliary-variable Local Exploration
  75. Hierarchical Generation of Molecular Graphs using Structural Motifs
  76. Junction Tree Variational Autoencoder for Molecular Graph Generation
  77. Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning (the original REINFORCE paper)
  78. Neural Discrete Representation Learning
  79. Categorical Reparameterization with Gumbel-Softmax
  80. Neural Relational Inference for Interacting Systems
  81. Contrastive Learning of Structured World Models
  82. The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables
  83. Learning Graph Structure With A Finite-State Automaton Layer
  84. Neural Turing Machines
  85. Oops I Took A Gradient: Scalable Sampling for Discrete Distributions
  86. Direct Policy Gradients: Direct Optimization of Policies in Discrete Action Spaces
  87. Gradient Estimation with Stochastic Softmax Tricks
  88. Differentiation of Blackbox Combinatorial Solvers
  89. REBAR: Low-Variance, Unbiased Gradient Estimates for Discrete Latent Variable Models
  90. DSDNet: Deep Structured Self-driving Network
  91. Learning to Search with MCTSnets
  92. Direct Loss Minimization for Structured Prediction
  93. Stochastic Beams and Where to Find Them: The Gumbel-Top-k Trick for Sampling Sequences Without Replacement
  94. Direct Optimization through argmax for Discrete Variational Auto-Encoder
  95. Learning Compositional Neural Programs with Recursive Tree Search and Planning
  96. Reinforcement Learning Neural Turing Machines - Revised
  97. The Generalized Reparameterization Gradient
  98. Gradient Estimation Using Stochastic Computation Graphs
  99. Learning to Search Better than Your Teacher
  100. Learning to Search in Branch-and-Bound Algorithms
  101. Model-Based Planning with Discrete and Continuous Actions
  102. Learning Transferable Graph Exploration
  103. Retro*: Learning Retrosynthetic Planning with Neural Guided A* Search
  104. Monte Carlo Gradient Estimation in Machine Learning
  105. Backpropagation Through the Void: Optimizing Control Variates for Black-Box Gradient Estimation
  106. Thinking Fast and Slow with Deep Learning and Tree Search
  107. Mastering the Game of Go without Human Knowledge
  108. Memory-Augmented Monte Carlo Tree Search
  109. M-Walk: Learning to Walk over Graphs using Monte Carlo Tree Search