A SAN doesn’t actually move data; it makes storage appear local to servers.

Imagine you have a server and you need more disk space. Traditionally, you’d slap a drive into the server’s chassis. A SAN decouples the physical disk from the server, presenting it as if it were a local device. This is achieved through a dedicated network of switches and host bus adapters (HBAs) that use block-storage protocols (such as Fibre Channel or iSCSI) to let servers "see" and interact with storage arrays over a network. The server’s operating system thinks it’s talking to a directly attached disk, but it’s actually talking to a block of storage residing on a shared, centralized array.

Let’s see this in action with a simple iSCSI setup.

On the storage array (let’s say a fictional "StorOS" appliance), you’d define a Logical Unit Number (LUN) – this is the actual block of storage.

storos-cli> lun create --name my_app_lun --size 1TB --protocol iscsi
Successfully created LUN 'my_app_lun' with size 1TB.

Then, you’d create an iSCSI target, which is the network endpoint for that LUN.

storos-cli> iscsi target create --name iqn.2023-10.com.example:storos.target1 --lun my_app_lun
Successfully created iSCSI target 'iqn.2023-10.com.example:storos.target1' for LUN 'my_app_lun'.

On the server (a Linux machine), you’d use the iscsiadm utility to discover and log in to this target. First, discover the target portal (the IP address and port of the storage array’s iSCSI service).

sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.100

This might output:

192.168.1.100:3260,1 iqn.2023-10.com.example:storos.target1

Now, log in to the target.

sudo iscsiadm -m node -T iqn.2023-10.com.example:storos.target1 --login

After a successful login, the LUN (my_app_lun) will appear as a new block device on the server, typically something like /dev/sdX.

lsblk

You’d see an entry like:

NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  500G  0 disk
└─sda1   8:1    0  500G  0 part /
sdb      8:16   0    1T  0 disk  <-- This is your SAN LUN

This sdb device is now ready to be partitioned, formatted, and mounted just like any local disk.
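A minimal sketch of putting the new LUN to use. The device name /dev/sdb and the mount point /mnt/my_app_data come from the example above; yours may differ:

```shell
# Partition the LUN (GPT label, one partition spanning the disk)
sudo parted -s /dev/sdb mklabel gpt mkpart primary ext4 0% 100%

# Put a file system on it and mount it
sudo mkfs.ext4 /dev/sdb1
sudo mkdir -p /mnt/my_app_data
sudo mount /dev/sdb1 /mnt/my_app_data

# Survive reboots: log in to the iSCSI target automatically...
sudo iscsiadm -m node -T iqn.2023-10.com.example:storos.target1 \
  -p 192.168.1.100 --op update -n node.startup -v automatic

# ...and mount via fstab with _netdev, so the mount waits for the network
echo "UUID=$(sudo blkid -s UUID -o value /dev/sdb1) /mnt/my_app_data ext4 _netdev 0 2" \
  | sudo tee -a /etc/fstab
```

The _netdev option matters because this "disk" is only reachable once the network is up; without it, boot can hang waiting for a device that doesn’t exist yet.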

The core problem a SAN solves is storage scalability and centralized management. Instead of managing disks in dozens or hundreds of individual servers, you manage them on one or a few storage arrays. This makes provisioning and expansion easier and, importantly, enables features like high availability and disaster recovery that are much harder to implement with direct-attached storage. The architecture works by abstracting physical storage into logical blocks (LUNs) and making them accessible over a high-speed network, typically Fibre Channel or iSCSI, using specialized adapters (HBAs) and protocols.

The "network" in SAN is crucial. For Fibre Channel, it involves dedicated switches and Host Bus Adapters (HBAs) that operate at a different layer than typical IP networks. For iSCSI, it leverages standard Ethernet infrastructure, but often requires careful network configuration (VLANs, jumbo frames, dedicated NICs) to achieve the performance and low latency needed for block storage. This network fabric is what allows servers to "see" and interact with storage devices that could be physically located in a different rack, a different room, or even a different data center.
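In practice, that "careful network configuration" for iSCSI often looks like the following sketch. The interface name eth1, VLAN ID 100, and addresses are assumptions for illustration:

```shell
# Dedicate a NIC to storage traffic and enable jumbo frames
sudo ip link set dev eth1 mtu 9000

# Optionally isolate storage traffic on its own VLAN
sudo ip link add link eth1 name eth1.100 type vlan id 100
sudo ip link set dev eth1.100 mtu 9000 up
sudo ip addr add 192.168.100.10/24 dev eth1.100
```

One caveat: every hop on the path (server NIC, switch ports, array ports) must agree on the MTU, or large frames are silently dropped and performance gets worse, not better.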

Many people think of SANs as just "networked hard drives." The real magic is in the protocol and addressing. Just like TCP/IP allows computers to address each other over a network, Fibre Channel Protocol (FCP) or iSCSI allows servers to address specific LUNs on storage arrays. Each LUN has a unique identifier, and the SAN fabric (switches, routers) ensures that the "address" for that LUN is correctly routed from the server’s HBA or NIC to the storage array. This block-level access is what makes it indistinguishable from local storage to the operating system.
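You can see this addressing on a Linux initiator. These commands are read-only and assume the iSCSI session established in the earlier example:

```shell
# Dump the active session: target IQN, portal, and the LUNs attached to it
sudo iscsiadm -m session -P 3

# The by-path symlinks spell out the full route to each block device:
# portal IP and port, target IQN, and LUN number
ls -l /dev/disk/by-path/ | grep iscsi
```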

When you initiate a read or write operation on a SAN LUN, the server doesn’t send file data. It sends SCSI commands (or equivalent) targeting specific block addresses on the LUN. These commands travel over the SAN fabric to the storage array. The array then performs the actual read/write on its internal disks and sends the requested data blocks (for reads) or a confirmation (for writes) back to the server over the same fabric. The server’s OS receives this data or confirmation as if it had just completed an operation on a local disk.
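You can get a feel for block-level access with dd, which addresses data by block offset rather than by file. The sketch below uses a file-backed stand-in for a LUN so it runs anywhere; against a real LUN you’d point dd at the block device (e.g. /dev/sdb) instead:

```shell
# Create a 10 MiB file standing in for a LUN
dd if=/dev/zero of=/tmp/fake_lun.img bs=1M count=10 status=none

# Write 11 bytes at block 5 (4 KiB block size), without truncating the "LUN"
printf 'hello-block' | dd of=/tmp/fake_lun.img bs=4096 seek=5 conv=notrunc status=none

# Read block 5 back: the data is located by offset, not by file name
dd if=/tmp/fake_lun.img bs=4096 skip=5 count=1 status=none | head -c 11
# prints: hello-block
```

This is the essence of what travels over the fabric: "read N blocks starting at offset X," not "open file Y."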

The key differentiator from Network Attached Storage (NAS) is the level of abstraction. NAS operates at the file level, presenting shared file systems (like NFS or SMB). A SAN operates at the block level, presenting raw disk volumes. This means the server’s OS manages the file system on top of the SAN-presented LUN. This allows for greater flexibility, as different servers can use different file systems on the same SAN storage, and it’s the foundation for advanced features like storage virtualization and clustering that require direct block-level access.
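The contrast is easiest to see in the commands themselves. Addresses, export paths, and mount points below are illustrative:

```shell
# NAS: the array owns the file system; the client just mounts it
sudo mkdir -p /mnt/nas_share
sudo mount -t nfs 192.168.1.100:/export/shared /mnt/nas_share

# SAN: the array hands out raw blocks; the client creates and owns the file system
sudo mkfs.xfs /dev/sdb
sudo mkdir -p /mnt/san_vol
sudo mount /dev/sdb /mnt/san_vol
```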

A common misconception is that SANs are inherently complex and expensive for small deployments. While enterprise SANs can be elaborate, iSCSI SANs can be built using standard Ethernet hardware and software initiators, making them accessible for smaller businesses looking for centralized storage with advanced features. The core components – storage array, network fabric, and host HBAs/initiators – are the same, just scaled down.
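On Linux, the "scaled down" version can be entirely software. Here is a hedged sketch using targetcli (the administration tool for the kernel's LIO target); the backing file, sizes, and IQNs are illustrative:

```shell
# Back a LUN with a plain file on the "array" machine
sudo targetcli backstores/fileio create name=small_lun file_or_dev=/var/lib/small_lun.img size=10G

# Create an iSCSI target and attach the LUN to it
sudo targetcli iscsi/ create iqn.2023-10.com.example:small.target1
sudo targetcli iscsi/iqn.2023-10.com.example:small.target1/tpg1/luns create /backstores/fileio/small_lun

# Allow a specific initiator to connect (its IQN lives in /etc/iscsi/initiatorname.iscsi)
sudo targetcli iscsi/iqn.2023-10.com.example:small.target1/tpg1/acls create iqn.2023-10.com.example:client1
```

From the server side, this software target is indistinguishable from the "StorOS" appliance in the earlier example: the same iscsiadm discovery and login steps apply.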

The next hurdle in understanding SANs is how they enable features like storage snapshots and replication.
