
Introduction to PyTorch

PyTorch is an open-source machine learning framework developed by Meta AI (formerly Facebook AI Research). It is widely used for deep learning due to its flexibility, ease of use, and strong GPU acceleration.

This tutorial assumes basic knowledge of Python and NumPy.

1. Installing PyTorch

Install PyTorch using pip (see pytorch.org for platform-specific commands, such as CUDA builds):

pip install torch torchvision torchaudio

Verify installation:

import torch
print(torch.__version__)
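
You can also check whether PyTorch can see a CUDA-capable GPU (used again in section 7):

print(torch.cuda.is_available())  # True if a CUDA GPU is visible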

2. Tensors in PyTorch

Tensors are the core data structure in PyTorch. They are similar to NumPy arrays but add GPU support and automatic differentiation.

Creating Tensors

import torch

# From a list
x = torch.tensor([1, 2, 3])
print(x)

# Zeros and ones
zeros = torch.zeros(2, 3)
ones = torch.ones(2, 3)

# Random values
rand = torch.rand(2, 3)
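
Because tensors mirror NumPy arrays, PyTorch converts between the two easily; on CPU, the conversion shares memory rather than copying. A minimal sketch:

import numpy as np

# NumPy array -> tensor (shares memory on CPU)
arr = np.array([1.0, 2.0, 3.0])
t = torch.from_numpy(arr)

# Tensor -> NumPy array
back = t.numpy()
print(t, back)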

Tensor Operations

a = torch.tensor([1, 2])
b = torch.tensor([3, 4])

# Element-wise addition -> tensor([4, 6])
print(a + b)

# Element-wise multiplication -> tensor([3, 8])
print(a * b)

# Matrix multiplication: (2, 3) @ (3, 2) -> (2, 2)
m1 = torch.rand(2, 3)
m2 = torch.rand(3, 2)
print(torch.matmul(m1, m2))
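
Shapes matter here: matmul requires m1's column count to match m2's row count. The .shape attribute and reshape are handy for inspecting and adjusting dimensions; a quick sketch:

print(m1.shape)       # torch.Size([2, 3])
flat = m1.reshape(6)  # flatten into a 1-D tensor of 6 elements
print(flat.shape)     # torch.Size([6])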

3. Autograd: Automatic Differentiation

PyTorch computes gradients automatically through its autograd engine, which is essential for training neural networks. Setting requires_grad=True on a tensor tells autograd to track operations on it so gradients can be computed with backward().

x = torch.tensor(2.0, requires_grad=True)
y = x ** 2 + 3 * x + 1

# Compute gradients
y.backward()

print(x.grad)  # dy/dx = 2x + 3 -> tensor(7.)

Here, calling y.backward() computes the derivative of y with respect to x, dy/dx = 2x + 3, which evaluates to 7 at x = 2.
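
Conversely, when gradients are not needed (for example during evaluation), wrap the computation in torch.no_grad() so autograd skips tracking:

with torch.no_grad():
    z = x * 2           # no computation graph is built here
print(z.requires_grad)  # False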

4. Building a Simple Neural Network

PyTorch provides the torch.nn module to define neural networks.

import torch
import torch.nn as nn

# Define a simple neural network
class SimpleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(2, 4)
        self.fc2 = nn.Linear(4, 1)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        return x

model = SimpleNet()
print(model)
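
For a simple feed-forward stack like this, an equivalent model can also be written more compactly with nn.Sequential:

# Same architecture as SimpleNet, defined layer by layer
model_seq = nn.Sequential(
    nn.Linear(2, 4),
    nn.ReLU(),
    nn.Linear(4, 1),
)
print(model_seq)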

5. Loss Function and Optimizer

Loss functions measure prediction error, and optimizers update model weights.

import torch.optim as optim

# Mean Squared Error loss
criterion = nn.MSELoss()

# Stochastic Gradient Descent optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01)
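
To see what the loss measures, it can be evaluated on a dummy prediction/target pair; MSELoss averages the squared differences. A minimal sketch:

# mean((0.5 - 1.0)^2, (1.0 - 1.0)^2) = 0.125
pred = torch.tensor([[0.5], [1.0]])
target = torch.tensor([[1.0], [1.0]])
print(criterion(pred, target))  # tensor(0.1250)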

6. Training Loop Example

A typical training loop includes forward pass, loss calculation, backward pass, and weight updates.

# Dummy dataset
inputs = torch.rand(10, 2)
targets = torch.rand(10, 1)

for epoch in range(100):
    # Forward pass
    outputs = model(inputs)
    loss = criterion(outputs, targets)

    # Backward pass
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if epoch % 20 == 0:
        print(f"Epoch {epoch}, Loss: {loss.item():.4f}")

7. Using a GPU (Optional)

PyTorch makes it easy to run on a GPU when one is available: choose a device, then move the model and data to it with .to(device).

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Move the model's parameters and the data to the chosen device
model.to(device)
inputs = inputs.to(device)
targets = targets.to(device)
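
After moving everything, computation runs on the selected device. A quick check of where the parameters and outputs now live:

print(next(model.parameters()).device)  # cuda:0 if a GPU was found, else cpu
outputs = model(inputs)
print(outputs.device)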

8. Summary

  • PyTorch uses tensors as its core data structure
  • Autograd enables automatic differentiation
  • torch.nn helps build neural networks
  • Training involves loss computation and optimization
  • GPU acceleration is easy to enable