Introduction to Deep Learning with PyTorch
Jasmin Ludolf
Senior Data Science Content Developer, DataCamp
$$
| Split | Percentage of data | Role |
|---|---|---|
| Training | 80-90% | Adjusts the model parameters |
| Validation | 10-20% | Tunes the hyperparameters |
| Test | 5-10% | Evaluates the final model performance |
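A split like the one in the table can be produced with `torch.utils.data.random_split`. This is a minimal sketch on a toy `TensorDataset`; the dataset contents, its size of 100 samples, and the exact 80/10/10 ratio are illustrative assumptions, not part of the course material:

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Toy dataset with 100 samples of 4 features each (sizes are illustrative)
features = torch.randn(100, 4)
labels = torch.randint(0, 3, (100,))
dataset = TensorDataset(features, labels)

# 80/10/10 split into training, validation, and test sets
train_set, val_set, test_set = random_split(dataset, [80, 10, 10])
```

Each resulting subset can then be wrapped in its own `DataLoader` for the training, validation, and test phases described below.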
$$
$$
```python
for epoch in range(num_epochs):
    training_loss = 0.0
    for inputs, labels in trainloader:
        # Run the forward pass
        outputs = model(inputs)
        # Compute the loss
        loss = criterion(outputs, labels)
        # Backpropagation: compute gradients
        loss.backward()
        # Update weights
        optimizer.step()
        # Reset gradients
        optimizer.zero_grad()
        # Calculate and sum the loss
        training_loss += loss.item()
    epoch_loss = training_loss / len(trainloader)
```
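To see the training loop run end to end, here is a self-contained sketch with a toy linear model. The model, data, loss, and optimizer choices are illustrative assumptions; the names `trainloader`, `model`, `criterion`, and `optimizer` mirror the ones used in the loop above:

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

torch.manual_seed(0)

# Toy regression data (shapes and sizes are illustrative)
X = torch.randn(64, 4)
y = torch.randn(64, 1)
trainloader = DataLoader(TensorDataset(X, y), batch_size=16)

model = nn.Linear(4, 1)              # Stand-in for a real network
criterion = nn.MSELoss()             # Loss function
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(3):
    training_loss = 0.0
    for inputs, labels in trainloader:
        outputs = model(inputs)              # Forward pass
        loss = criterion(outputs, labels)    # Compute the loss
        loss.backward()                      # Compute gradients
        optimizer.step()                     # Update weights
        optimizer.zero_grad()                # Reset gradients
        training_loss += loss.item()         # Sum the batch losses
    epoch_loss = training_loss / len(trainloader)
```

Dividing by `len(trainloader)` averages over the number of batches, giving a mean loss per batch for the epoch.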
```python
validation_loss = 0.0
model.eval()  # Put model in evaluation mode
with torch.no_grad():  # Disable gradients for efficiency
    for inputs, labels in validationloader:
        # Run the forward pass
        outputs = model(inputs)
        # Calculate the loss
        loss = criterion(outputs, labels)
        validation_loss += loss.item()
    # Compute mean loss
    epoch_loss = validation_loss / len(validationloader)
model.train()  # Switch back to training mode
```
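As a runnable sketch of the validation pass, with a toy model and dataset standing in for the real ones (the model, data, loss, and sizes are all assumptions for illustration):

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

torch.manual_seed(0)
model = nn.Linear(4, 1)
criterion = nn.MSELoss()
validationloader = DataLoader(
    TensorDataset(torch.randn(20, 4), torch.randn(20, 1)), batch_size=10
)

validation_loss = 0.0
model.eval()                      # Evaluation mode: disables dropout, batchnorm updates, etc.
with torch.no_grad():             # No gradient tracking needed for evaluation
    for inputs, labels in validationloader:
        outputs = model(inputs)               # Forward pass only
        loss = criterion(outputs, labels)
        validation_loss += loss.item()
epoch_loss = validation_loss / len(validationloader)  # Mean loss per batch
model.train()                     # Switch back to training mode
```

Forgetting `model.train()` at the end is a common bug: layers such as dropout would then stay disabled during the next training epoch.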
```python
import torchmetrics

# Create accuracy metric
metric = torchmetrics.Accuracy(task="multiclass", num_classes=3)

for features, labels in dataloader:
    # Run the forward pass
    outputs = model(features)
    # Compute batch accuracy (keeping argmax for one-hot labels)
    metric.update(outputs, labels.argmax(dim=-1))

# Compute accuracy over the whole epoch
accuracy = metric.compute()

# Reset metric for the next epoch
metric.reset()
```
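What `metric.update()` accumulates and `metric.compute()` returns can be reproduced with plain tensor operations. This sketch uses hand-made logits and one-hot labels (all values are illustrative) to show the argmax-based accuracy calculation:

```python
import torch

# Fake logits and one-hot labels for a 3-class problem (values are illustrative)
outputs = torch.tensor([[2.0, 0.1, 0.3],
                        [0.2, 1.5, 0.1],
                        [0.3, 0.2, 2.2],
                        [1.1, 0.9, 0.1]])
labels = torch.tensor([[1, 0, 0],
                       [0, 0, 1],
                       [0, 0, 1],
                       [1, 0, 0]])

preds = outputs.argmax(dim=-1)      # Predicted class index per sample
targets = labels.argmax(dim=-1)     # One-hot labels -> class indices
accuracy = (preds == targets).float().mean()  # Fraction of correct predictions
```

This also shows why the slide applies `labels.argmax(dim=-1)` before `metric.update()`: torchmetrics expects class indices, not one-hot vectors, for the targets.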