Network-Attached Storage (NAS) and Storage Area Networks (SAN) both provide centralized storage, but they do so at fundamentally different layers of the network stack, leading to vastly different performance characteristics and use cases.

Let’s see it in action. Imagine you have a small office with a few employees who need to share documents. A NAS is perfect for this. You buy a NAS device, plug it into your existing Ethernet network, and it appears as a shared folder on everyone’s computer.

Here’s a simplified view of how it works:

  • NAS: Think of a NAS as a dedicated file server. It runs its own operating system and presents storage as files and folders using protocols like NFS (for Linux/Unix) or SMB/CIFS (for Windows). When a client computer needs a file, it asks the NAS for that specific file. The NAS handles the file system operations.

    # On a Linux client, mounting an NFS share from a NAS
    sudo mount -t nfs 192.168.1.100:/exports/shared_docs /mnt/nas_share
    

    In this command:

    • 192.168.1.100 is the IP address of the NAS.
    • /exports/shared_docs is the shared directory on the NAS.
    • /mnt/nas_share is the local mount point on the client.
  • SAN: A SAN, on the other hand, operates at a lower level, presenting storage as raw blocks. It typically uses protocols like Fibre Channel (FC) or iSCSI (over Ethernet). To a server, a SAN looks like a locally attached hard drive. The server’s operating system then formats this block storage with its own file system.

    # On a Linux server, discovering iSCSI targets
    sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.200
    
    # Logging into an iSCSI target; its LUNs then appear as local block devices
    sudo iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.target.your_target_name:your_lun_id -p 192.168.1.200 --login
    # Then format and mount the new block device (e.g., /dev/sdb)
    sudo mkfs.ext4 /dev/sdb
    sudo mount /dev/sdb /mnt/san_volume
    

    Here:

    • 192.168.1.200 is the IP address of the iSCSI target (the storage side; the client that connects to it is called the initiator).
    • iqn.2003-01.org.linux-iscsi.target.your_target_name:your_lun_id is the iSCSI Qualified Name (IQN) identifying the target; the LUNs behind that target appear to the server as block devices.
    • /dev/sdb would be the new block device presented by the SAN, which the server then formats.
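Both mounts above disappear on reboot. To make them persistent, you could add entries to /etc/fstab. This is a sketch assuming the same addresses and mount points as the examples above; the `_netdev` option tells the system to wait for the network before mounting:

```
# /etc/fstab — persistent mounts (sketch; addresses and paths from the examples above)
192.168.1.100:/exports/shared_docs  /mnt/nas_share   nfs   defaults,_netdev  0 0
/dev/sdb                            /mnt/san_volume  ext4  defaults,_netdev  0 0
```

In practice you would reference the iSCSI volume by its filesystem UUID (from `blkid`) rather than `/dev/sdb`, since kernel device names can change between boots.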

The core problem SANs solve is providing high-performance, block-level access to storage for applications that need it, like databases or virtualization hosts. NAS solves the problem of easy, network-accessible file sharing for end-users and general-purpose applications.

The key difference lies in how the client/server interacts with the storage. With NAS, the client asks for files. With SAN, the client asks for blocks, and the client’s OS manages the file system on top of those blocks. This means a SAN typically offers much lower latency and higher throughput because it bypasses the file system overhead of the NAS device itself. The SAN’s network is often dedicated (e.g., Fibre Channel) or runs over high-speed Ethernet, further enhancing performance.
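One way to see the throughput difference for yourself is to benchmark both mount points. Below is a sketch of an fio job file; it assumes the NAS share and SAN volume are mounted at /mnt/nas_share and /mnt/san_volume as in the examples above, and the sizes are illustrative:

```
; compare.fio — run with: fio compare.fio
[global]
rw=write
bs=1M
size=256M
direct=1
end_fsync=1

[nas-seq-write]
directory=/mnt/nas_share

[san-seq-write]
; stonewall serializes this job so the two tests don't run concurrently
stonewall
directory=/mnt/san_volume
```

Sequential 1 MB writes measure raw throughput; rerunning with `rw=randread` and `bs=4k` would highlight the latency side instead.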

When you configure a NAS, you’re primarily concerned with creating shares, setting user permissions, and managing the NAS device’s own file system. For a SAN, you’re concerned with LUNs (Logical Unit Numbers – essentially, the "disks" presented by the SAN), zoning (controlling which servers can see which LUNs in a Fibre Channel environment), and the file system configuration on the server that’s accessing the SAN.
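On the NAS side, "creating shares" often comes down to a few lines of configuration. On a Linux-based NAS exporting over NFS, for example, the share from the earlier mount command might be defined in /etc/exports like this (a sketch; the network range and options are assumptions):

```
# /etc/exports on the NAS — export the shared directory to the office LAN
/exports/shared_docs  192.168.1.0/24(rw,sync,no_subtree_check)
```

After editing the file, `exportfs -ra` reloads the export table without restarting the NFS server.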

Most people don’t realize that when you use iSCSI for your SAN, you’re essentially layering a block-level storage protocol over a standard IP network. This makes SANs more accessible and cost-effective than traditional Fibre Channel SANs, as they can leverage existing Ethernet infrastructure. However, it also means that network congestion on the IP network can directly impact SAN performance, which is why dedicated iSCSI networks are often recommended for critical workloads.
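When iSCSI does share a congested network, timeout behavior matters as much as raw bandwidth. A hedged sketch of the relevant knobs in open-iscsi's /etc/iscsi/iscsid.conf (the values are illustrative, not recommendations):

```
# /etc/iscsi/iscsid.conf — timeout tuning (illustrative values)
# How long to wait for a dropped session to recover before failing I/O up the stack
node.session.timeo.replacement_timeout = 120
# Send a NOP-Out ping every 5 s; declare the connection dead after 5 s with no reply
node.conn[0].timeo.noop_out_interval = 5
node.conn[0].timeo.noop_out_timeout = 5
```

On a dedicated, uncongested iSCSI network these defaults rarely trigger; on a shared LAN, too-aggressive values can turn a brief congestion spike into failed I/O.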

The next step in understanding shared storage is often exploring the nuances of object storage and its differences from block and file storage.
