2 changes: 1 addition & 1 deletion our-initiatives/tutorials/2024-2025/_category_.json
@@ -1,6 +1,6 @@
{
"label": "2024-2025",
"position": 1,
"position": 2,
"link": {
"type": "doc",
"id": "tutorials/2024-2025/index"
1 change: 1 addition & 0 deletions our-initiatives/tutorials/2024-2025/index.mdx
@@ -19,6 +19,7 @@ This academic year, the tutorial series is being delivered by the following peop
- [Paul Chaminieu](#) (ML Officer)
- [Anna-Maria](#) (ML Officer)
- [Franciszek Nowak](#) (ML Officer - Visual Computing I)
- [James Ray](#) (ML Officer - Generative Visual Computing)

## DOXA Challenges

14 changes: 14 additions & 0 deletions our-initiatives/tutorials/2024-2025/intro_to_transformers.md
@@ -0,0 +1,14 @@
---
sidebar_position: 11
---

# 9: Introduction to Transformers

**Date: 11th December 2024**

💡 **Transformers** were initially introduced for the purpose of **machine translation**, but they are now the most prevalent state-of-the-art architecture for virtually all deep learning tasks. Unlike traditional neural networks, Transformers rely on a mechanism called **attention**, which allows them to focus on relevant parts of the input sequence. Unlike RNNs, this architecture takes in sequential input data in parallel.

Central to this model are the **encoder-decoder blocks**, where input data undergoes **tokenization** and is embedded into vectors with **positional encodings** to capture word order. This week, we will explore the **attention mechanism**, including **multi-headed attention**, the structure of **encoder and decoder blocks**, and the processes involved in **training Transformers**, such as **tokenization, masking strategies**, and managing **computational costs**. 💡
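
To make the attention mechanism above concrete, here is a minimal sketch of scaled dot-product attention in NumPy. The function names, shapes, and toy data are illustrative assumptions, not code from the tutorial materials.

```python
# A minimal sketch of scaled dot-product attention (illustrative, not the tutorial's code).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Each query attends over all keys; the outputs are weighted sums of the values."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)  # (seq_len, seq_len) similarity scores
    weights = softmax(scores, axis=-1)              # each row sums to 1
    return weights @ V                              # weighted combination of values

# Toy example: a "sequence" of 4 tokens with embedding dimension 8.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
# In a real Transformer, Q, K and V come from learned linear projections of x,
# and multi-headed attention runs several of these in parallel.
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8): one attended representation per token
```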

You can access our **slides** here: 💻 [**Tutorial 9 Slides**](https://www.canva.com/design/DAGYOwRh8u8/xn2OqkUHgTGClSoYOhSxYQ/view?utm_content=DAGYOwRh8u8&utm_campaign=designshare&utm_medium=link2&utm_source=uniquelinks&utlId=ha097b37913)
2 changes: 1 addition & 1 deletion our-initiatives/tutorials/2024-2025/neural-networks.md
@@ -2,7 +2,7 @@
sidebar_position: 7
---

# 4: Neural Networks
# 5: Neural Networks

**Date: 13th November 2024**

14 changes: 14 additions & 0 deletions our-initiatives/tutorials/2024-2025/rnns.md
@@ -0,0 +1,14 @@
---
sidebar_position: 10
---

# 8: Recurrent Neural Networks

**Date: 4th December 2024**

💡 **Recurrent Neural Networks (RNNs)** are a class of models designed to handle sequential data, such as **time series** or **language**, by using **feedback loops** to maintain **context** over time. This week, we will explore the fundamentals of RNNs, the challenges of training them (especially backpropagation through time), and the introduction of variants like **Long Short-Term Memory (LSTM)** networks that better capture **long-term dependencies**. We will also briefly contrast these approaches with **transformers**, which have largely replaced RNNs and LSTMs in state-of-the-art applications by using self-attention mechanisms to model sequence elements in parallel, ultimately offering a broader perspective on modern sequence modeling techniques. 💡
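
To make the feedback-loop idea concrete, here is a minimal sketch of a vanilla RNN forward pass in NumPy. The weight names and dimensions are illustrative assumptions and not taken from the demonstration notebook.

```python
# A minimal sketch of a vanilla RNN forward pass (illustrative, not the notebook's code).
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size, seq_len = 3, 5, 7

# Parameters (randomly initialised here; learned via backpropagation through time in practice).
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
b_h = np.zeros(hidden_size)

def rnn_forward(xs):
    """Process the sequence one step at a time, feeding the hidden state back in."""
    h = np.zeros(hidden_size)
    hidden_states = []
    for x_t in xs:  # sequential: step t depends on the state from step t-1
        h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)
        hidden_states.append(h)
    return np.stack(hidden_states)

xs = rng.normal(size=(seq_len, input_size))
hs = rnn_forward(xs)
print(hs.shape)  # (7, 5): one hidden state per time step
```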

You can access our **demonstration notebook** here: 📘 [**Tutorial 8 Notebook**](https://github.com/UCLAIS/ml-tutorials-season-5/blob/main/week-8/rnn.ipynb)

You can access our **slides** here: 💻 [**Tutorial 8 Slides**](https://www.canva.com/design/DAGSEPaNv_I/RpD2FqJCqnRyZxwa_cvsGQ/view?utm_content=DAGSEPaNv_I&utm_campaign=designshare&utm_medium=link2&utm_source=uniquelinks&utlId=h053c9bd49f)

2 changes: 1 addition & 1 deletion our-initiatives/tutorials/2024-2025/visual-computing-1.md
@@ -2,7 +2,7 @@
sidebar_position: 8
---

# 5: Visual Computing I
# 6: Visual Computing I

**Date: 20th November 2024**

27 changes: 27 additions & 0 deletions our-initiatives/tutorials/2024-2025/visual-computing-2.md
@@ -0,0 +1,27 @@
---
sidebar_position: 9
---

# 7: Generative Visual Computing

**Date: 27th November 2024**

💡 This week, we'll dive into the exciting world of generative models for computer vision! We'll explore how to create models that can learn the intrinsic features of a dataset and generate new images. We'll focus on building an **auto-encoder**, a powerful tool for capturing the essence of visual data in a compressed **latent space**.

A latent space is a lower-dimensional representation that encodes the most important features of the data. By learning this compact representation, generative models can create new images that resemble the original dataset. We'll introduce you to various state-of-the-art models used in industry and research, such as **Variational Auto-Encoders** (VAEs), **Generative Adversarial Networks** (GANs) and **Diffusion Models**! 💡
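
To give a feel for what an auto-encoder looks like in code, here is a minimal sketch in PyTorch using fully connected layers. The layer sizes, latent dimension, and flattened CIFAR-10-style inputs are assumptions for illustration, not the exact model from the tutorial notebook.

```python
# A minimal auto-encoder sketch in PyTorch (illustrative sizes, not the tutorial's exact model).
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, input_dim=32 * 32 * 3, latent_dim=64):
        super().__init__()
        # Encoder: compress the flattened image into a low-dimensional latent vector.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )
        # Decoder: reconstruct the image from the latent vector.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, input_dim), nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        z = self.encoder(x)  # latent representation
        return self.decoder(z), z

model = AutoEncoder()
x = torch.rand(8, 32 * 32 * 3)  # toy batch of 8 flattened "images"
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)  # reconstruction objective used for training
print(z.shape, loss.item())  # torch.Size([8, 64]) and a scalar loss
```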

Also, a reminder that we are running a [**DOXA challenge**](https://doxaai.com/competition/cifar-10) which you can win prizes for!

- 1st place will get a Mystery prize 🍫 + an AI Society shirt 👕 + a pen 🖊️
- 2nd place will receive an AI Society shirt 👕 + a pen 🖊️ (**NOTE** that 1st and 2nd place have to achieve a score greater than 0.8074)
- The remaining participants will receive UCL AI Society pens, as long as they score above Jeremy (0.6077)

The deadline for submission is Wednesday, December 4th, at 5:59 PM, which is right before our next session on RNNs.

For more information on how to do better in the challenge, see the last 10 slides and watch our tutorial recording below.

You can access our **slides** here: 💻 [**Tutorial 7 Slides**](https://www.canva.com/design/DAGSEDAKiHs/jRkDsMJRc65jzSe0KgmbYg/view?utm_content=DAGSEDAKiHs&utm_campaign=designshare&utm_medium=link&utm_source=editor)

The **recording** from this session is available here: 🎤 [**Tutorial 7 Recording**](https://youtu.be/5ceoctSndC0)

We did not go through the **demonstration notebook** during the session, but you can access it here: 📘 [**Tutorial 7 Notebook**](https://github.com/UCLAIS/ml-tutorials-season-5/blob/main/week-7/VAE_andAE.ipynb)
2 changes: 1 addition & 1 deletion our-initiatives/tutorials/_category_.json
@@ -1,6 +1,6 @@
{
"label": "💻 ML Tutorial Series",
"position": 2,
"position": 1,
"link": {
"type": "doc",
"id": "tutorials/2024-2025/index"
10 changes: 6 additions & 4 deletions our-initiatives/tutorials/index.mdx
@@ -1,5 +1,5 @@
---
sidebar_position: 0
sidebar_position: 1
---

import DocCardList from '@theme/DocCardList'
@@ -18,6 +18,8 @@ This academic year, the tutorial series is being delivered by the following peop
- [Zachary Baker](#) (ML Officer)
- [Paul Chaminieu](#) (ML Officer)
- [Anna-Maria](#) (ML Officer)
- [Franciszek Nowak](#) (ML Officer - Visual Computing I)
- [James Ray](#) (ML Officer - Generative Visual Computing)

## DOXA Challenges

@@ -40,22 +42,22 @@ During the first half term, we aim to cover basic concepts of **classical ML**:
- Tutorial 0: **Introduction to AI**
- Tutorial 1: **Introduction to Python**
- Tutorial 2: **Regression**
- Tutorial 3: **Classification I**
- Tutorial 3: **Classification I** (Doxa)
- Tutorial 4: **Classification II**

After reading week, we will focus on **Deep Learning**!

- Tutorial 5: **Neural Networks**
- Tutorial 6: **Visual Computing I** (Doxa)
- Tutorial 7: **Generative Visual Computing**
- Tutorial 8: **Recurrent Neural Networks** (Doxa)
- Tutorial 8: **Recurrent Neural Networks**
- Tutorial 9: **Introduction to Transformers**

### Term 2

- Tutorial 10: **Natural Language Processing I**
- Tutorial 11: **Natural Language Processing II**
- Tutorial 12: **Graph neural networks / Reinforcement learning**
- Tutorial 12: **Graph Neural Networks / Reinforcement Learning**

## Previous Seasons
