The most surprising thing about Weights & Biases is how much it doesn’t change your workflow, and yet fundamentally alters your understanding of it.

Let’s see it in action. Imagine you’re training a deep learning model for image classification. You’ve got your PyTorch or TensorFlow code, and you want to track experiments.

import wandb
import torch
import torch.nn as nn
import torch.optim as optim

# Initialize a W&B run
run = wandb.init(project="image-classification", entity="your-username", job_type="training")

# Define your model
class SimpleCNN(nn.Module):
    def __init__(self):
        super(SimpleCNN, self).__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.relu = nn.ReLU()
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.fc = nn.Linear(16 * 16 * 16, 10) # Assuming input image size of 32x32

    def forward(self, x):
        x = self.pool(self.relu(self.conv1(x)))
        x = x.view(-1, 16 * 16 * 16)
        x = self.fc(x)
        return x

model = SimpleCNN()
criterion = nn.CrossEntropyLoss()

# Log hyperparameters to W&B before they are used,
# so the config is the single source of truth
wandb.config.learning_rate = 0.001
wandb.config.batch_size = 32
wandb.config.epochs = 10
wandb.config.optimizer = "Adam"

optimizer = optim.Adam(model.parameters(), lr=wandb.config.learning_rate)

# Simulate training loop
for epoch in range(wandb.config.epochs):
    running_loss = 0.0
    for i in range(100): # Simulate mini-batches
        # Dummy data and labels
        inputs = torch.randn(wandb.config.batch_size, 3, 32, 32)
        labels = torch.randint(0, 10, (wandb.config.batch_size,))

        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()

    epoch_loss = running_loss / 100
    print(f"Epoch {epoch+1}, Loss: {epoch_loss:.4f}")

    # Log metrics to W&B (epoch+1 matches the printed epoch number)
    wandb.log({"epoch": epoch + 1, "loss": epoch_loss})

# Log the trained model
torch.save(model.state_dict(), "model.pth")
wandb.save("model.pth")

# Finish the run
run.finish()

This snippet shows the core idea: you import wandb, call wandb.init(), and then use wandb.log() to send metrics and wandb.config to track hyperparameters. You can also log artifacts like trained models.

The real power of W&B isn’t just logging; it’s about building a central hub for your ML projects. It addresses the chaos of "which script ran with which parameters on which dataset version?" by providing a structured, searchable, and comparable record of every experiment. You get a dashboard where you can see training curves, compare hyperparameters side-by-side, visualize model predictions, and even set up alerts for performance regressions. It creates a single source of truth for your model development lifecycle.
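Since wandb.log() accepts an arbitrary dictionary, anything you can compute you can chart alongside the loss. As a small sketch, here is an accuracy helper you might log per epoch (compute_accuracy is a name introduced here, not part of W&B):

```python
import torch

def compute_accuracy(outputs: torch.Tensor, labels: torch.Tensor) -> float:
    """Fraction of samples whose argmax prediction matches the label."""
    preds = outputs.argmax(dim=1)
    return (preds == labels).float().mean().item()

# In the training loop you could then log both curves together:
# wandb.log({"epoch": epoch + 1, "loss": epoch_loss, "accuracy": epoch_acc})
```

Both metrics then appear as separate, comparable curves in the run's dashboard.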

Internally, W&B consists of a client (the Python library you use) and a server (the cloud-hosted or self-hosted dashboard). The client serializes your data (metrics, configs, system stats, code versions, etc.) and sends it to the server. The server stores this data and presents it through a web interface. The job_type parameter in wandb.init is a simple way to categorize runs, useful for distinguishing training from evaluation or preprocessing steps.

A note on wandb.save("model.pth"): this uploads the file alongside the run, which is convenient but unversioned. For versioning, W&B provides Artifacts. You wrap files in a named artifact (say, a "model" artifact); each time you log a file under the same artifact name, it gets a new version (v0, v1, and so on). You can then refer to specific versions of your model in subsequent runs, like for evaluation, ensuring reproducibility. This artifact system is crucial for tracking model lineage and managing deployment.

A common pattern is to call run.log_code() right after wandb.init() (or pass save_code=True to wandb.init). This saves a snapshot of your source files, by default the Python files under the given root, to the W&B server for that specific run. This is powerful because it captures the exact code that produced a given result, making it far easier to reproduce an experiment later. Note that it snapshots your source code, not your installed dependencies; pin those separately, for example with a requirements file.

The next step in mastering W&B is exploring its capabilities for hyperparameter optimization with Sweeps.
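To preview what that looks like, here is a sketch of a sweep configuration; the search space is made up, and the wandb.sweep / wandb.agent calls are left commented because they contact the W&B server:

```python
# Illustrative Sweeps configuration; the search space is hypothetical.
sweep_config = {
    "method": "bayes",  # also supported: "grid", "random"
    "metric": {"name": "loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 1e-4, "max": 1e-2},
        "batch_size": {"values": [16, 32, 64]},
    },
}

# Launching requires a W&B account, so these calls are commented out:
# sweep_id = wandb.sweep(sweep_config, project="image-classification")
# wandb.agent(sweep_id, function=train, count=20)  # train = your training function
```

Each agent run receives its sampled hyperparameters through wandb.config, which is exactly why the training script above reads values like wandb.config.learning_rate instead of hardcoding them.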
