Weights & Biases (W&B) config management isn’t just about logging hyperparameters; it’s about creating an immutable, auditable record of your experiment’s intent that survives even if you forget to log something manually.
Let’s see it in action. Imagine you’re training a simple neural network with TensorFlow.
import wandb
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# 1. Initialize W&B run and set configuration
run = wandb.init(
    project="my-tf-project",
    config={
        "learning_rate": 0.01,
        "epochs": 10,
        "batch_size": 32,
        "layer_1_units": 64,
        "activation_1": "relu",
        "optimizer": "adam",
    },
)

# Access the config object directly
print(f"Learning rate from config: {wandb.config.learning_rate}")
print(f"Number of epochs: {wandb.config.epochs}")

# 2. Build your model using config values
model = Sequential([
    Dense(wandb.config.layer_1_units,
          activation=wandb.config.activation_1,
          input_shape=(784,)),
    Dense(10),  # Output layer for MNIST (10 classes)
])

# Select the optimizer named in the config
optimizer_name = wandb.config.optimizer
if optimizer_name == "adam":
    optimizer = tf.keras.optimizers.Adam(learning_rate=wandb.config.learning_rate)
elif optimizer_name == "sgd":
    optimizer = tf.keras.optimizers.SGD(learning_rate=wandb.config.learning_rate)
else:
    raise ValueError(f"Unknown optimizer: {optimizer_name}")

model.compile(optimizer=optimizer,
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# 3. Load MNIST, then flatten and normalize (replace with your actual data loading)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# 4. Train the model, logging metrics automatically via the Keras callback
history = model.fit(x_train, y_train,
                    epochs=wandb.config.epochs,
                    batch_size=wandb.config.batch_size,
                    validation_split=0.1,  # hold out 10% of training data for validation
                    callbacks=[wandb.keras.WandbCallback()])

# 5. Finish the run
wandb.finish()
When you run this, W&B automatically logs the config dictionary you passed to wandb.init. This isn’t just a snapshot; it becomes the source of truth for this specific experiment. You can then access wandb.config.learning_rate, wandb.config.epochs, etc., directly within your script. This decouples your code from hardcoded values and makes it immediately clear what parameters were used for a given run.
The real power emerges when you want to search and compare experiments. On the W&B dashboard, you’ll see a "Config" tab for each run. You can filter runs based on these config values. Want to find all runs where learning_rate was 0.001 and layer_1_units was 128? Easy. Want to see how performance varied across different activation_1 functions? Just group by that config key. This is crucial for hyperparameter sweeps, where W&B orchestrates multiple runs with systematically varied configurations.
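To make the sweep connection concrete, here is a minimal sketch of a sweep definition over the same config keys. The value ranges and the metric name are illustrative assumptions; the parameter names must match what the training script reads from wandb.config.

```python
# Hypothetical sweep definition reusing the config keys from the example above.
sweep_config = {
    "method": "grid",  # or "random", "bayes"
    "metric": {"name": "val_accuracy", "goal": "maximize"},
    "parameters": {
        "learning_rate": {"values": [0.01, 0.001]},
        "layer_1_units": {"values": [64, 128]},
        "activation_1": {"values": ["relu", "tanh"]},
    },
}

# Registering and launching the sweep requires a logged-in W&B session:
# sweep_id = wandb.sweep(sweep_config, project="my-tf-project")
# wandb.agent(sweep_id, function=train)  # train() calls wandb.init() itself
```

Because the agent injects each sampled combination into wandb.config, the training script above runs unchanged across every sweep trial.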
Internally, wandb.init(config=...) takes your dictionary and creates a wandb.Config object, a dictionary-like structure that W&B keeps synced with the run record. When you access a value like wandb.config.learning_rate, you’re reading from this persistent record. You can also update the config after wandb.init; note that overwriting a key that already exists requires passing allow_val_change=True (for example via wandb.config.update), after which the new value is recorded for that run.
The wandb.keras.WandbCallback is a prime example of how config integrates. It doesn’t just log metrics; it uses the wandb.config to label those metrics. When you view your training curves, you’ll see them automatically associated with the specific hyperparameters that produced them. This makes it trivial to compare performance directly against the settings that generated it, without needing to manually cross-reference logs.
What most people miss is that wandb.config is also a powerful tool for programmatically controlling your experiment flow, not just for logging. You can use conditional logic based on config values to switch optimizers, adjust data preprocessing, or even load different model architectures, all driven by the initial configuration. This allows for dynamic experiment setups that are still fully reproducible via the recorded config.
The next step after mastering per-run config is understanding how to programmatically search and filter runs based on these logged configurations using the W&B API.