Scalable AI Models with PyTorch Lightning
Sergiy Tkachuk
Director, GenAI Productivity
import torch.nn.functional as F

def training_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self(x)
    loss = F.cross_entropy(y_hat, y)  # classification loss
    self.log("train_loss", loss)      # Lightning handles the logging backend
    return loss
def configure_optimizers(self):
    optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
    return optimizer
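trainer.validate, called below, relies on a validation_step hook that the slides do not show; a minimal sketch, assuming the same classification setup as training_step:

def validation_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self(x)
    val_loss = F.cross_entropy(y_hat, y)
    self.log("val_loss", val_loss)  # "val_loss" is an illustrative metric name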
trainer.fit(model, train_dataloader)
trainer.validate(model, val_dataloader)
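The train_dataloader and val_dataloader passed to the Trainer are never defined in the slides; a minimal sketch, assuming the standard torchvision MNIST dataset (consistent with the 28 x 28 input of the model below):

from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.ToTensor()
train_set = datasets.MNIST("data", train=True, download=True, transform=transform)
val_set = datasets.MNIST("data", train=False, download=True, transform=transform)

train_dataloader = DataLoader(train_set, batch_size=64, shuffle=True)
val_dataloader = DataLoader(val_set, batch_size=64)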
import torch
import pytorch_lightning as pl

class LightClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(28 * 28, 10)  # flattened 28x28 image -> 10 classes

    def forward(self, x):
        return self.layer(x.view(x.size(0), -1))   # flatten each image before the linear layer
    def training_step(self, batch, batch_idx): ...  # same body as shown above
    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
model = LightClassifier()                 # Define classifier model
trainer = pl.Trainer(max_epochs=5)        # Define trainer
trainer.fit(model, train_dataloader)      # Train on the training set
trainer.validate(model, val_dataloader)   # Evaluate on the validation set
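The same script scales by reconfiguring the Trainer rather than rewriting the model; a minimal multi-GPU sketch, assuming a machine with two GPUs is available (the accelerator, devices, and strategy arguments are standard Trainer options, not shown in the slides):

trainer = pl.Trainer(
    max_epochs=5,
    accelerator="gpu",  # assumes GPU hardware is present
    devices=2,          # hypothetical: two GPUs on one machine
    strategy="ddp",     # distributed data parallel across the devices
)
trainer.fit(model, train_dataloader)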
Why training logic matters
Real-world examples: