Attaching a block storage volume to a compute instance is like plugging a new hard drive into your server, but with more networking involved.
Let’s see how this looks in practice. Imagine you’ve got a virtual machine (VM) running, and you need more disk space for your application.
Unattached volumes live on the provider side, so you'd list them with your cloud CLI (for OpenStack, openstack volume list), not from inside the guest. What you can do on the VM is take a baseline of the block devices it currently sees:
# Take a baseline of the block devices the VM can see right now.
# The new volume won't appear here until it's attached.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
# Output might look like:
# NAME     SIZE TYPE MOUNTPOINT
# sda       50G disk
# └─sda1    50G part /
# sr0     1024M rom
Now we need to tell the platform to actually connect the volume to our VM. This is done via an API call to your cloud provider or hypervisor. For example, using the legacy nova CLI for OpenStack's compute service (newer deployments use openstack server add volume instead):
# Attach the volume 'your-volume-id' to the server 'your-server-id'
nova volume-attach your-server-id your-volume-id
The system responds, confirming the attachment. On the VM, a new block device appears. If it's the first extra disk, it's typically /dev/vdb under virtio-blk (or /dev/sdb under virtio-scsi and other SCSI-style transports).
# After attachment, run lsblk again on the VM.
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
# Output now shows the attached volume:
# NAME     SIZE TYPE MOUNTPOINT
# sda       50G disk
# └─sda1    50G part /
# sr0     1024M rom
# vdb      100G disk
Notice vdb has appeared with no mountpoint and no partitions: it's raw, unformatted space. (If you add lsblk's PKNAME column, it stays empty for vdb; PKNAME names the parent device and is only populated for partitions and other child devices, never for a whole disk.)
The next crucial step is to format the new disk so your operating system can use it. If you try to mount it without a filesystem, mount fails with a "wrong fs type, bad option, bad superblock"-style error, because there is nothing on the device for the kernel to recognize.
# Format the new volume with ext4 filesystem.
# WARNING: This will erase any data on the volume.
mkfs.ext4 /dev/vdb
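If you'd like to rehearse the formatting step without risking a real device, mkfs will happily operate on an ordinary file. A minimal practice run, where scratch.img is just an example filename:

```shell
# Create a sparse 100 MB file and format it; -F forces mkfs.ext4
# to accept a regular file, -q keeps the output quiet.
truncate -s 100M scratch.img
mkfs.ext4 -q -F scratch.img

# Confirm a filesystem is now present (low-level probe, no root needed):
blkid -p -o value -s TYPE scratch.img   # prints: ext4
```

The same mkfs/blkid workflow then applies unchanged to /dev/vdb.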
Once formatted, you can create a mount point (a directory) and then mount the new filesystem.
# Create a directory to mount the volume.
mkdir /data
# Mount the formatted volume to the new directory.
mount /dev/vdb /data
To make this mount persistent across reboots, you’ll need to add an entry to /etc/fstab.
# Get the UUID of the new filesystem.
blkid /dev/vdb
# Example output (a whole-disk filesystem has no PARTUUID):
# /dev/vdb: UUID="a1b2c3d4-e5f6-7890-1234-abcdef012345" TYPE="ext4"
# Add this line to your /etc/fstab file:
# UUID=a1b2c3d4-e5f6-7890-1234-abcdef012345 /data ext4 defaults 0 2
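One way to add the entry and sanity-check it before the next reboot is sketched below; the UUID is the example value from above, and findmnt --verify requires util-linux 2.29 or newer:

```shell
# Append the entry (substitute the UUID blkid reported for your volume).
echo 'UUID=a1b2c3d4-e5f6-7890-1234-abcdef012345 /data ext4 defaults 0 2' >> /etc/fstab

# Check fstab for syntax and sanity problems without rebooting:
findmnt --verify

# Prove the entry works: unmount, then remount via fstab alone.
umount /data
mount /data
```

Catching a typo here with findmnt is much cheaper than discovering it when the machine drops into an emergency shell at boot.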
This setup allows you to decouple your data from your compute instances. You can detach a volume from one VM and attach it to another, migrating your data without reformatting or reinstalling. The key is that the volume itself is a persistent resource, managed independently of the ephemeral compute instance.
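As a hedged sketch, moving a volume between two instances looks like this; the server and volume IDs are placeholders:

```shell
# On the old VM: stop writing to the filesystem, then unmount it.
umount /data

# From a machine with OpenStack credentials: detach, then re-attach.
nova volume-detach old-server-id your-volume-id
nova volume-attach new-server-id your-volume-id

# On the new VM: the device appears (often /dev/vdb again).
# Mount it directly -- no mkfs this time, or you'd wipe the data.
mkdir -p /data
mount /dev/vdb /data
```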
The actual mechanism involves a storage controller on the host hypervisor that presents the block device to the guest VM. This presentation can be done via various protocols like VirtIO-SCSI or iSCSI, abstracting the underlying storage hardware. The guest OS then sees this as a local block device, unaware of the network hops or storage arrays that might be involved.
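You can peek at this abstraction from inside the guest. lsblk's TRAN column reports the transport where the kernel knows it, and a disk's sysfs path names the driver stack behind it; device names here are examples:

```shell
# TRAN shows the transport (sata, nvme, iscsi, ...); it can be
# blank for some paravirtual disks.
lsblk -o NAME,TRAN

# The sysfs path for a disk reveals the bus it hangs off,
# e.g. a "virtio" component for VirtIO block devices.
readlink -f /sys/block/vdb
```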
When you detach a volume, the hypervisor simply revokes access from the VM, and the storage controller marks it as available for attachment elsewhere. The data remains on the storage array, safe and sound.
A common pitfall is forgetting to unmount a filesystem before detaching the volume from the hypervisor. If you detach a volume that’s actively mounted and has I/O operations in progress, you risk data corruption. Always ensure umount /data (or wherever you mounted it) completes successfully before initiating the detach operation.
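A small guard script makes the ordering hard to get wrong. This is only a sketch: safe_detach, the IDs, and the mount point are all hypothetical, and fuser (from psmisc) may not be installed everywhere:

```shell
# Refuse to detach while the filesystem is still mounted or busy.
safe_detach() {
  local mnt="$1" server="$2" volume="$3"
  sync                              # flush dirty pages toward the volume
  if ! umount "$mnt"; then          # fails if busy or not mounted
    echo "refusing to detach: $mnt could not be unmounted" >&2
    fuser -vm "$mnt" >&2 || true    # show who is holding it open
    return 1
  fi
  nova volume-detach "$server" "$volume"
}

# Usage (placeholders):
# safe_detach /data your-server-id your-volume-id
```

Only after umount succeeds does the detach call run, so an in-use filesystem can never be yanked out from under its writers.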
The next step is typically managing snapshots of these volumes for backup and disaster recovery.