Logging your first W&B run isn’t just about saving metrics; it’s about creating a living document of your experiment that tells the whole story.

Let’s get this going. First, you need to install the library:

pip install wandb

Now, you’ll need to log in to your Weights & Biases account. If you don’t have one, head over to wandb.ai and sign up – it’s free for individuals and small teams. Once you have an account, run this in your terminal:

wandb login

This will prompt you for an API key. You can find your API key on your W&B settings page. Paste it in, and you’re authenticated.
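On a remote machine or in CI, where an interactive prompt is awkward, you can authenticate non-interactively instead: W&B reads the WANDB_API_KEY environment variable automatically, so no login prompt is needed. A minimal sketch (the key value below is a placeholder, not a real key):

```shell
# Non-interactive authentication for remote machines or CI pipelines.
# W&B picks up WANDB_API_KEY automatically; replace the placeholder
# with the key from your W&B settings page.
export WANDB_API_KEY="your-api-key-here"
```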

Now, let’s write a tiny Python script to log a run. Imagine you’re training a simple model and want to track its accuracy and loss.

import wandb
import random

# Initialize a new W&B run
wandb.init(project="my-first-project", job_type="training")

# Simulate some training steps
for i in range(100):
    accuracy = random.random()
    loss = 1 - accuracy  # Simple inverse relationship for demo

    # Log metrics for this step
    wandb.log({"accuracy": accuracy, "loss": loss})

# Finish the run
wandb.finish()

Save this as log_run.py and execute it:

python log_run.py

As soon as you run this, a new entry appears on your W&B dashboard. The terminal output includes a link that takes you straight to the run’s page, where live plots of accuracy and loss update as the script executes. On a real training job, the same mechanism gives you immediate, real-time feedback on your metrics.

The wandb.init() call is the entry point. The project argument groups related runs together. Think of it as a folder for your experiments. job_type is a label you can assign to distinguish different kinds of runs within a project (e.g., "training", "evaluation", "hyperparameter-sweep").

Inside the loop, wandb.log() is the workhorse. It accepts a dictionary mapping metric names to their values, and W&B automatically plots numeric values for you. Beyond plain numbers, you can also log strings, nested dictionaries, and rich media (images, audio, tables) via W&B’s media types such as wandb.Image.
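One useful detail: the W&B UI groups metric names by slash prefix, so keys like "train/loss" and "val/loss" land in separate panel sections. A small, hypothetical helper (not part of the wandb API) to apply such prefixes consistently:

```python
def namespaced(split, metrics):
    """Prefix metric keys, e.g. "loss" -> "train/loss"; the W&B UI
    groups slash-separated names into panel sections."""
    return {f"{split}/{key}": value for key, value in metrics.items()}

batch_metrics = namespaced("train", {"loss": 0.42, "accuracy": 0.81})
# wandb.log(batch_metrics)  # would log under a "train" section
```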

The wandb.finish() call signals the end of the run. It’s good practice to call it explicitly, especially in notebooks or long-running scripts, so the run is marked complete rather than left hanging.

Beyond metrics, you can log much more. For instance, to record your hyperparameters for later comparison, pass a config dictionary to wandb.init():

import wandb
import random

config = {"learning_rate": 0.01, "epochs": 100, "batch_size": 32}

wandb.init(project="my-first-project", job_type="training", config=config)

for i in range(100):
    accuracy = random.random()
    loss = 1 - accuracy
    wandb.log({"accuracy": accuracy, "loss": loss})

wandb.finish()

Now, on your run’s dashboard, you’ll see a "Config" tab populated with these hyperparameters. This is crucial for reproducibility – you know exactly what settings produced a given result. You can later compare runs and see how different configurations affect performance.
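One pattern that pairs well with the config argument is keeping shared defaults in one place and overriding them per run, so every experiment’s full settings end up on the dashboard. A sketch with a hypothetical helper (build_config is not part of the wandb API):

```python
# Hypothetical helper, not part of the wandb API: merge shared defaults
# with per-run overrides before passing the result to wandb.init(config=...).
DEFAULTS = {"learning_rate": 0.01, "epochs": 100, "batch_size": 32}

def build_config(**overrides):
    config = dict(DEFAULTS)   # copy so the defaults stay untouched
    config.update(overrides)
    return config

config = build_config(learning_rate=0.001)
# wandb.init(project="my-first-project", config=config)
```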

A common pitfall when starting is forgetting to call wandb.finish(). If your script crashes or exits without calling it, the run may appear as "crashed" or "unfinished" on the dashboard. While W&B often handles this gracefully, an explicit call ensures clean termination. Also, keep your project name consistent if you want runs to be grouped together logically.

The next step is exploring how to automatically log model checkpoints and code versions.
