Introduction to Deep Learning with PyTorch
Jasmin Ludolf
Senior Data Science Content Developer, DataCamp
The derivative represents the slope of the curve.
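As a minimal sketch of this idea (the function and the point are chosen for illustration, not taken from the slides), PyTorch's autograd can compute the slope of a curve at a point:

import torch

x = torch.tensor(2.0, requires_grad=True)
y = x ** 2     # a simple curve
y.backward()   # compute the derivative dy/dx
print(x.grad)  # tensor(4.), the slope of x**2 at x = 2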
A convex function has a single global minimum, while a non-convex function can have multiple local minima.
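As a hedged illustration (these particular functions are not from the slides), x ** 2 is convex with one minimum, while x ** 4 - x ** 2 is non-convex with two local minima:

import torch

x = torch.linspace(-1.5, 1.5, 7)
convex = x ** 2               # one global minimum at x = 0
non_convex = x ** 4 - x ** 2  # local minima near x = -0.71 and x = 0.71
print(convex)
print(non_convex)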
Consider a network made of three layers:
# Run a forward pass
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 8),
                      nn.Linear(8, 4),
                      nn.Linear(4, 2))
sample = torch.randn(1, 16)  # dummy input with 16 features (assumed for illustration)
prediction = model(sample)
# Calculate the loss and gradients
target = torch.tensor([0])  # dummy class label (assumed for illustration)
criterion = nn.CrossEntropyLoss()
loss = criterion(prediction, target)
loss.backward()
# Access each layer's gradients
model[0].weight.grad
model[0].bias.grad
model[1].weight.grad
model[1].bias.grad
model[2].weight.grad
model[2].bias.grad
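Equivalently, a short sketch (not from the slides) can loop over every parameter's gradient at once using model.named_parameters():

for name, param in model.named_parameters():
    print(name, param.grad.shape)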
# Learning rate is typically small
lr = 0.001

# Update the weights
weight = model[0].weight
weight_grad = model[0].weight.grad
weight = weight - lr * weight_grad

# Update the biases
bias = model[0].bias
bias_grad = model[0].bias.grad
bias = bias - lr * bias_grad
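Note that assigning weight - lr * weight_grad to a new variable does not modify the model itself. A hedged sketch of applying the same update rule in place to every parameter (torch.no_grad() keeps the updates out of the autograd graph):

with torch.no_grad():
    for param in model.parameters():
        param -= lr * param.grad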
For non-convex functions, we will use gradient descent
PyTorch simplifies this with optimizers
import torch.optim as optim

# Create the optimizer
optimizer = optim.SGD(model.parameters(), lr=0.001)

# Perform parameter updates
optimizer.step()
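Putting the pieces together, one full training step typically looks like this (a sketch reusing model, sample, target, and criterion from above; calling optimizer.zero_grad() between steps is standard practice):

optimizer.zero_grad()                 # clear gradients from the previous step
prediction = model(sample)            # forward pass
loss = criterion(prediction, target)  # compute the loss
loss.backward()                       # compute gradients
optimizer.step()                      # update all parameters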