| Date | Jupyter | Colab | Video |
|---|---|---|---|
| May 27, 2025 | Training your First Neural Network (in PyTorch!) | ||
| May 26, 2025 | Automatic Differentiation in PyTorch | ||
| May 23, 2025 | Backpropagation and Computational Graphs | ||
| May 6, 2025 | A Very Gentle Introduction to PyTorch (maybe too gentle?) | ||
| April 7, 2021 | Autoscaling machine learning APIs in Python with Ray | ||
| March 24, 2021 | How does Ray compare to Apache Spark?? | ||
| March 16, 2021 | Stateful Distributed Computing in Python with Ray Actors | ||
| March 15, 2021 | Remote functions in Python with Ray | ||
| March 10, 2021 | Introduction to Distributed Computing with the Ray Framework | ||
The easiest way to get started with the code (videos or not) is to use a cloud notebook environment/platform like Google Colab (or Kaggle, Paperspace, etc.). For convenience I've provided links to the raw Jupyter notebooks for local development, an NBViewer link if you would like to browse the code without cloning the repo (or you can use the built-in GitHub viewer), and a Colab link if you would like to interactively run the code without setting up a local development environment (and fighting with CUDA libraries).
If you find any errors in the code or materials, please open a GitHub issue or email errata@jonathandinu.com.
```bash
git clone https://github.com/jonathandinu/youtube.git
cd youtube
```

Code implemented and tested with Python 3.10.12 (other versions >= 3.8 are likely to work fine, but buyer beware...). To install all of the packages used across the notebooks in a local virtual environment:
```bash
# pyenv install 3.10.12
python --version
# => Python 3.10.12
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```

Or, using `uv`:
```bash
uv venv
uv pip install -r requirements.txt
```

If using `pyenv` or `uv` to manage Python versions, both should automatically use the version listed in `.python-version` when changing into this directory.
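If you want to double-check which interpreter your virtual environment actually picked up, a quick sanity check from inside Python (a minimal sketch; the `(3, 8)` floor just mirrors the ">= 3.8" note above, and 3.10.12 is the version the code was tested with):

```python
import sys

# confirm the active interpreter satisfies the minimum supported version
assert sys.version_info >= (3, 8), f"Python >= 3.8 required, got {sys.version}"
print(f"Running Python {sys.version_info.major}.{sys.version_info.minor}")
```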
Additionally, the notebooks are set up with a cell to automatically select an appropriate device (GPU) based on what is available. If on a Windows or Linux machine, both NVIDIA and AMD GPUs should work (though this has only been tested with NVIDIA). And if on an Apple Silicon Mac, Metal Performance Shaders will be used.
```python
import torch

# default device boilerplate
device = (
    "cuda"  # NVIDIA (and AMD ROCm) GPUs
    if torch.cuda.is_available()
    else "mps"  # Apple Silicon (Metal Performance Shaders)
    if torch.backends.mps.is_available()
    else "cpu"
)
print(f"Using {device} device")
```

If no compatible device can be found, the code will default to a CPU backend. This should be fine for Lessons 1 and 2, but for any of the image generation examples (pretty much everything after Lesson 2), not using a GPU will likely be uncomfortably slow; in that case I would recommend using the Google Colab links in the table above.
©️ 2024 Jonathan Dinu. All Rights Reserved. Removal of this copyright notice or reproduction in part or whole of the text, images, and/or code is expressly prohibited. For permission to use the content please contact copyright@jonathandinu.com.