## Outline

Shared with: Bruno Galerne

Timetable: Monday 9:30 to 12:30

All the information about the course can be found here.
The image in the description of the course is extracted from the paper Score-Based Generative Modeling with Critically-Damped Langevin Diffusion.

For every question or remark related to the course, please send an email to generative.modeling.mva@gmail.com

## Slides

## Lab session

## Assignment

Due on **Monday 20/03 2023**.

## Final exam

For the final exam, the students will study one research paper in groups of 3 (minimum) to 4 (maximum). **Fill out the Google form with your group and choice of paper before Monday 13/03 2023 (hard deadline).**

The report should be sent to generative.modeling.mva@gmail.com before **Friday 28/04 2023 (hard deadline)**.
Oral presentations will take place (hybrid mode available) in May.

For each paper, the report must include theoretical, methodological and experimental considerations. You will be evaluated on these three aspects. Please indicate the contribution of each member of the team in the report.

We do not expect the students to reproduce the high-dimensional experiments presented in some of these papers, as we are aware of the compute limitations.

You don't have to answer all the questions listed under each paper. These are merely indications of potentially interesting avenues but feel free to pursue other directions. Most of these questions are open research problems.

The group and chosen paper must be reported in this form: Google Form

Please fill only one form per group!

Format: report of 10 pages maximum (code not included). Additional material containing proofs/notebook is appreciated. Your report is due on TBA.

You will also present your work during the final evaluation (TBA). The format of this presentation is 30 minutes by group.

The final grade will be $\tfrac{2}{3} G_1 + \tfrac{1}{3} G_2$ where $G_1$ is the grade obtained on the report and $G_2$ the grade obtained on the presentation of your work.
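As a quick illustration of the weighting above, here is a minimal sketch (the function name and the example grades are ours, not part of the course rules):

```python
# Illustrative computation of the final grade from the report grade G1
# and the presentation grade G2, using the weights 2/3 and 1/3 above.
def final_grade(g1: float, g2: float) -> float:
    """Weighted average: 2/3 report + 1/3 presentation."""
    return (2 * g1 + g2) / 3

# Example: report graded 15/20 and presentation graded 12/20.
print(final_grade(15, 12))  # -> 14.0
```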

**Flow Matching for Generative Modeling** (Arxiv Link)

- How does this model compare to classical diffusion models (advantages/drawbacks)?
- Can you derive a theoretical framework for flow matching similar to the one of diffusion models?
- How can you leverage the additional flexibility of these models?

**Analog Bits: Generating Discrete Data using Diffusion Models with Self-Conditioning** (Arxiv Link)

- Can you give a theoretical explanation of the self-conditioning procedure?
- What are the limitations of such quantized approaches?
- What is the experimental importance of self-conditioning?

**A Variational Perspective on Diffusion-Based Generative Models and Score Matching** (Arxiv Link)

- What is the importance of a variational framework? What does it imply theoretically speaking?
- How do you interpret the discrepancy between the loss obtained using the variational framework and the one used in practice?
- How do diffusion models fit into the variational encoder family?

**Denoising Diffusion Restoration Models** (Arxiv Link)

- Can you provide a time-continuous interpretation of this model?
- Can you extend the model to non-linear problems?
- How can you combine this model with other inverse problem diffusion model techniques?

**Denoising Diffusion Implicit Models** (Arxiv Link)

- Can you provide a time-continuous interpretation of this model?
- What are the benefits/drawbacks of deterministic sampling compared to stochastic sampling?
- What other probabilistic decomposition could be used to define diffusion models?

**Likelihood training of Schrödinger Bridge** (Arxiv Link)

- What are the benefits of this formulation over classical diffusion models?
- What are the links between this approach and stochastic control? How can we leverage this link?
- Can this framework be extended to solve any quasi-parabolic PDEs? What are the potential applications?