Storage area network
A storage area network (SAN) or storage network is a computer network which provides access to consolidated, block-level data storage. SANs are primarily used to make storage devices, such as disk arrays and tape libraries, accessible to servers so that the devices appear to the operating system as locally attached. A SAN is typically a dedicated network of storage devices not accessible through the local area network (LAN) by other devices, thereby keeping storage traffic from interfering with LAN traffic.
The cost and complexity of SANs dropped in the early 2000s to levels allowing wider adoption across both enterprise and small to medium-sized business environments.
A SAN does not provide file abstraction, only block-level operations. However, file systems built on top of SANs do provide file-level access, and are known as shared-disk file systems.
Storage architectures
Storage area networks (SANs) are sometimes referred to as the network behind the servers[1]:11 and historically developed out of the centralised data storage model, but with their own data network. A SAN is, at its simplest, a dedicated network for data storage. In addition to storing data, SANs allow for the automatic backup of data, and the monitoring of the storage as well as of the backup process.[2]:16–17 A SAN is a combination of hardware and software.[2]:9 It grew out of data-centric mainframe architectures, where clients in a network can connect to several servers that store different types of data.[2]:11 To scale storage capacities as the volumes of data grew, direct-attached storage (DAS) was developed, where disk arrays or just a bunch of disks (JBODs) were attached to servers. In this architecture, storage devices can be added to increase storage capacity. However, the server through which the storage devices are accessed is a single point of failure, and a large part of the LAN bandwidth is used for accessing, storing and backing up data. To solve the single-point-of-failure issue, a direct-attached shared storage architecture was implemented, where several servers could access the same storage device.[2]:16–17
DAS was the first network storage system and is still widely implemented where data storage requirements are not very high. Out of it developed the network-attached storage (NAS) architecture, where one or more dedicated file servers or storage devices are made available in a LAN.[2]:18 The transfer of data, particularly for backup, therefore still takes place over the existing LAN. If more than a terabyte of data was stored at any one time, LAN bandwidth became a bottleneck.[2]:21–22 SANs were therefore developed, in which a dedicated storage network is attached to the LAN, and terabytes of data are transferred over a dedicated, high-speed, high-bandwidth network. Within the storage network, storage devices are interconnected. Transfer of data between storage devices, such as for backup, happens behind the servers and is meant to be transparent.[2]:22 While in a NAS architecture data is transferred using the TCP and IP protocols over Ethernet, distinct protocols were developed for SANs, such as Fibre Channel, iSCSI and InfiniBand. Therefore, SANs often have their own network and storage devices, which have to be bought, installed and configured. This makes SANs inherently more expensive than NAS architectures.[2]:29
SAN components
SANs have their own networking devices, such as SAN switches. To access the SAN, so-called SAN servers are used, which in turn connect to SAN interfaces. Within the SAN, a range of data storage devices may be interconnected, such as SAN-capable disk arrays, JBODs and tape libraries.[2]:32,35–36
Host layer
Servers that allow access to the SAN and its storage devices are said to form the host layer of the SAN. Such servers have host bus adapters (HBAs), hardware cards that attach to slots on the server main board (usually PCI slots) and run with corresponding firmware and drivers. Through the host bus adapters the operating system of the server can communicate with the storage devices in the SAN.[3]:26 A cable connects to the host bus adapter card through a gigabit interface converter (GBIC). These interface converters are also attached to switches and storage devices within the SAN; they convert digital bits into light impulses that can then be transmitted over the Fibre Channel cables, and conversely convert incoming light impulses back into digital bits. The predecessor of the GBIC was the gigabit link module (GLM).[3]:27 This applies to Fibre Channel deployments only.
Fabric layer
The fabric layer consists of SAN networking devices, including SAN switches, routers, protocol bridges, gateway devices, and cables. SAN network devices move data within the SAN, or between an initiator, such as an HBA port of a server, and a target, such as the port of a storage device. SAN networks are usually built with redundancy, so SAN switches are connected with redundant links. SAN switches connect the servers with the storage devices and are typically non-blocking, thus transmitting data across all attached wires at the same time.[3]:29 When SANs were first built, hubs were the only devices that were Fibre Channel capable, but Fibre Channel switches were developed and hubs are now rarely found in SANs. Switches have the advantage over hubs that they allow all attached devices to communicate simultaneously, as a switch provides a dedicated link to connect all its ports with one another.[3]:34 SAN switches are, for redundancy purposes, set up in a meshed topology. A single SAN switch can have as few as 8 ports, up to 32 ports with modular extensions.[3]:35 So-called director-class switches can have as many as 128 ports.[3]:36 When SANs were first built, Fibre Channel had to be implemented over copper cables; these days multimode optical fibre cables are used in SANs.[3]:40 In switched SANs the Fibre Channel switched fabric protocol FC-SW-6 is used, whereby every device in the SAN has a hardcoded World Wide Name (WWN) address in the host bus adapter (HBA). If a device is connected to the SAN, its WWN is registered in the SAN switch name server.[3]:47 In place of a WWN, or worldwide port name (WWPN), SAN Fibre Channel storage device vendors may also hardcode a worldwide node name (WWNN). The ports of storage devices often have a WWN starting with 5, while the bus adapters of servers start with 10 or 21.[3]:47
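The WWN prefix convention above lends itself to a simple heuristic. The following is an illustrative sketch only, assuming WWNs written as colon-separated hex pairs; real fabrics identify device roles through fabric login, not by inspecting the name:

```python
def classify_wwn(wwn: str) -> str:
    """Guess a SAN device's role from the leading digits of its
    World Wide Name. Storage device ports often start with 5, while
    server host bus adapters commonly start with 10 or 21."""
    digits = wwn.replace(":", "")
    if digits.startswith("5"):
        return "storage port"
    if digits.startswith(("10", "21")):
        return "server HBA"
    return "unknown"

print(classify_wwn("50:06:01:60:3b:20:19:af"))  # storage port
print(classify_wwn("21:00:00:e0:8b:05:05:04"))  # server HBA
```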
Storage layer
On top of the Fibre Channel switched fabric protocol is often the serialized Small Computer Systems Interface (SCSI) protocol, implemented in servers and SAN storage devices. It allows software applications to communicate, or encode data, for storage devices. The Internet Small Computer Systems Interface (iSCSI) over Ethernet and the InfiniBand protocols may also be found implemented in SANs, but are often bridged into the Fibre Channel SAN.[3]:47 However, InfiniBand and iSCSI storage devices, in particular disk arrays, are available.[3]:48
The various storage devices in a SAN are said to form the storage layer. It can include a variety of hard disk and magnetic tape devices that store data. In SANs, disk arrays are joined through a RAID, which makes many hard disks look and perform like one big storage device.[3]:48 Every storage device, or even partition on that storage device, has a logical unit number (LUN) assigned to it. This is a unique number within the SAN, and every node in the SAN, be it a server or another storage device, can access the storage through the LUN. The LUNs allow for the storage capacity of a SAN to be segmented and for the implementation of access controls. A particular server, or a group of servers, may, for example, only be given access to a particular part of the SAN storage layer, in the form of LUNs. When a storage device receives a request to read or write data, it checks its access list to establish whether the node, identified by its LUN, is allowed to access the storage area, also identified by a LUN.[3]:148–149 LUN masking is a technique whereby the host bus adapter and the SAN software of a server restrict the LUNs for which commands are accepted. In doing so, LUNs that should in any case not be accessed by the server are masked.[3]:354 Another method to restrict server access to particular SAN storage devices is fabric-based access control, or zoning, which has to be implemented on the SAN networking devices and the servers. Server access is thereby restricted to storage devices that are in a particular SAN zone.[4]
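The access-list check described above can be sketched as a lookup table. This is a toy illustration, not a real controller API; the initiator names and LUN assignments are invented:

```python
# Hypothetical masking table: initiator name -> set of LUNs it may address.
MASKING_TABLE = {
    "server-a": {0, 1},   # server-a may address LUNs 0 and 1
    "server-b": {2},      # server-b is restricted to LUN 2
}

def command_allowed(initiator: str, lun: int) -> bool:
    """Return True if the initiator may send read/write commands to the LUN.
    Unknown initiators are denied by default."""
    return lun in MASKING_TABLE.get(initiator, set())

print(command_allowed("server-a", 1))  # True
print(command_allowed("server-b", 0))  # False: LUN 0 is masked for server-b
```

Defaulting to an empty set means any initiator not listed is denied, mirroring the deny-by-default posture of LUN masking.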
SAN network protocols
Most storage networks use the SCSI protocol for communication between servers and disk drive devices. A mapping layer to other protocols is used to form a network:
ATA over Ethernet (AoE), mapping of ATA over Ethernet
Fibre Channel Protocol (FCP), the most prominent one, is a mapping of SCSI over Fibre Channel
Fibre Channel over Ethernet (FCoE)
ESCON over Fibre Channel (FICON), used by mainframe computers
HyperSCSI, mapping of SCSI over Ethernet
iFCP[5] or SANoIP,[6] mapping of FCP over IP
iSCSI, mapping of SCSI over TCP/IP
iSCSI Extensions for RDMA (iSER), mapping of iSCSI over InfiniBand
SCSI RDMA Protocol (SRP), another SCSI implementation for RDMA transports
Storage networks may also be built using SAS and SATA technologies. SAS evolved from SCSI direct-attached storage. SATA evolved from IDE direct-attached storage. SAS and SATA devices can be networked using SAS Expanders.
Examples of stacked protocols using SCSI:
In each stack, applications sit on top of the SCSI layer, which is carried as follows:

FCP: FCP → Fibre Channel
FCoE: FCP → FCoE → Ethernet
FCIP: FCP → FCIP → TCP → IP → Ethernet
iFCP: FCP → iFCP → TCP → IP → Ethernet
iSCSI: iSCSI → TCP → IP → Ethernet
iSER: iSER → RDMA transport → IP or InfiniBand network → Ethernet or InfiniBand link
SRP: SRP → RDMA transport → IP or InfiniBand network → Ethernet or InfiniBand link
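The stackings above can be expressed as a small lookup, purely as an informal sketch (the layer names follow the examples; nothing here is a real protocol implementation):

```python
# Layers beneath the SCSI command layer for each transport, informally.
STACKS = {
    "FCP":   ["FCP", "Fibre Channel"],
    "FCoE":  ["FCP", "FCoE", "Ethernet"],
    "FCIP":  ["FCP", "FCIP", "TCP", "IP", "Ethernet"],
    "iFCP":  ["FCP", "iFCP", "TCP", "IP", "Ethernet"],
    "iSCSI": ["iSCSI", "TCP", "IP", "Ethernet"],
    "iSER":  ["iSER", "RDMA transport", "InfiniBand"],
    "SRP":   ["SRP", "RDMA transport", "InfiniBand"],
}

def describe(protocol: str) -> str:
    """Render the full stack for a given SCSI transport."""
    return " over ".join(["SCSI"] + STACKS[protocol])

print(describe("iSCSI"))  # SCSI over iSCSI over TCP over IP over Ethernet
```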
SAN software
A SAN is primarily defined as a special-purpose network; the Storage Networking Industry Association (SNIA) defines a SAN as "a network whose primary purpose is the transfer of data between computer systems and storage elements". But a SAN does not just consist of a communication infrastructure; it also has a software management layer. This software organizes the servers, storage devices, and the network so that data can be transferred and stored. Because a SAN is not direct-attached storage (DAS), the storage devices in the SAN are not owned and managed by a single server.[1]:11 Potentially, the data storage capacity that can be accessed by a single server through a SAN is infinite, and this storage capacity may also be accessible by other servers.[1]:12 Moreover, SAN software must ensure that data is moved directly between storage devices within the SAN, with minimal server intervention.[1]:13
SAN management software is installed on one or more servers, and management clients on the storage devices. Two approaches to SAN management software have developed: in-band management means that management data between server and storage devices is transmitted on the same network as the storage data, while out-of-band management means that management data is transmitted over dedicated links.[1]:174 SAN management software collects management data from all storage devices in the storage layer, including information on read and write failures, storage capacity bottlenecks and failure of storage devices. SAN management software may integrate with the Simple Network Management Protocol (SNMP).[1]:176
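What the management layer does with the collected data can be sketched roughly as follows. The device records, thresholds and field names are invented for illustration; real SAN management suites expose this through their own interfaces or SNMP:

```python
# Illustrative device metrics as a management server might collect them.
devices = [
    {"name": "array-1", "capacity_used": 0.92, "write_failures": 0},
    {"name": "array-2", "capacity_used": 0.40, "write_failures": 3},
]

def alerts(devs, capacity_threshold=0.85):
    """Flag capacity bottlenecks and devices reporting write failures."""
    out = []
    for d in devs:
        if d["capacity_used"] >= capacity_threshold:
            out.append(f"{d['name']}: capacity bottleneck")
        if d["write_failures"] > 0:
            out.append(f"{d['name']}: write failures detected")
    return out

print(alerts(devices))  # flags array-1 (capacity) and array-2 (failures)
```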
In 1999 an open standard was introduced for managing storage devices and providing interoperability, the Common Information Model (CIM). The web-based version of CIM is called Web-Based Enterprise Management (WBEM) and defines SAN storage device objects and process transactions. Use of these protocols involves a CIM object manager (CIMOM) to manage objects and interactions, and allows for the central management of SAN storage devices. Basic device management for SANs can also be achieved through the Storage Management Interface Specification (SMI-S), where CIM objects and processes are registered in a directory. Software applications and subsystems can then draw on this directory.[1]:177 Management software applications are also available to configure SAN storage devices, allowing, for example, the configuration of zones and logical unit numbers (LUNs).[1]:178
Ultimately SAN networking and storage devices are available from many vendors. Every SAN vendor has its own management and configuration software. Common management in SANs that include devices from different vendors is only possible if vendors make the application programming interface (API) for their devices available to other vendors. In such cases, upper-level SAN management software can manage the SAN devices from other vendors.[1]:180
SAN filesystems
In a SAN, data is transferred, stored and accessed on a block level. As such, a SAN does not provide data file abstraction, only block-level storage and operations. But file systems have been developed to work with SAN software to provide file-level access. These are known as SAN file systems, or shared-disk file systems.[7]
Server operating systems maintain their own file systems on their own dedicated, non-shared LUNs, as though they were local to themselves. If multiple systems were simply to attempt to share a LUN, they would interfere with each other and quickly corrupt the data. Any planned sharing of data on different computers within a LUN requires software, such as SAN file systems or clustered computing.
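The coordination such software provides can be sketched with a local thread lock standing in for a SAN file system's distributed lock manager (a simplification: real shared-disk file systems coordinate across hosts, not threads). The point is that every read-modify-write of shared on-disk structures, here a free-block list, must be serialized so no block is handed out twice:

```python
import threading

lock = threading.Lock()       # stands in for a distributed lock manager
free_list = list(range(100))  # shared free-block list on the "disk"
allocated = []

def allocate(n):
    for _ in range(n):
        with lock:  # serialize the read-modify-write on shared metadata
            allocated.append(free_list.pop(0))

threads = [threading.Thread(target=allocate, args=(25,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(set(allocated)) == 100)  # True: no block handed out twice
```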
In media and entertainment
Video editing systems require very high data transfer rates and very low latency.
SANs in media and entertainment are often referred to as serverless due to the nature of the configuration which places the video workflow (ingest, editing, playout) desktop clients directly on the SAN rather than attaching to servers. Control of data flow is managed by a distributed file system such as StorNext by Quantum.[8] Per-node bandwidth usage control, sometimes referred to as quality of service (QoS), is especially important in video editing as it ensures fair and prioritized bandwidth usage across the network.
Quality of service
SAN Storage QoS enables the desired storage performance to be calculated and maintained for network customers accessing the device.
Some factors that affect SAN QoS are:
Bandwidth – The rate of data throughput available on the system.
Latency – The time delay for a read/write operation to execute.
Queue depth – The number of outstanding operations waiting to execute to the underlying disks (traditional or solid-state drives).
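These three quantities are related by Little's law: the average number of outstanding operations equals throughput multiplied by average latency. A quick sanity check with made-up numbers (not from any particular array):

```python
# Little's law: outstanding operations = arrival rate x time in system.
iops = 20_000       # operations per second the array sustains (illustrative)
latency_s = 0.0008  # 0.8 ms average completion latency (illustrative)

queue_depth = iops * latency_s
print(queue_depth)  # about 16 outstanding operations on average
```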
QoS can be impacted in a SAN storage system by an unexpected increase in data traffic (a usage spike) from one network user, which can cause performance to decrease for other users on the same network. This is known as the "noisy neighbor effect." When QoS services are enabled in a SAN storage system, the noisy neighbor effect can be prevented and network storage performance can be accurately predicted.
Using SAN storage QoS is in contrast to using disk over-provisioning in a SAN environment. Over-provisioning can be used to provide additional capacity to compensate for peak network traffic loads. However, where network loads are not predictable, over-provisioning can eventually cause all bandwidth to be fully consumed and latency to increase significantly, resulting in SAN performance degradation.
Storage virtualization
Storage virtualization is the process of abstracting logical storage from physical storage. The physical storage resources are aggregated into storage pools, from which the logical storage is created. It presents to the user a logical space for data storage and transparently handles the process of mapping it to the physical location, a concept called location transparency. This is implemented in modern disk arrays, often using vendor proprietary technology. However, the goal of storage virtualization is to group multiple disk arrays from different vendors, scattered over a network, into a single storage device. The single storage device can then be managed uniformly.[citation needed]
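The mapping of logical to physical storage can be sketched as an extent table. Everything here is hypothetical: the array names, extent size, and table layout are invented to illustrate location transparency, not drawn from any vendor's implementation:

```python
EXTENT = 1024  # logical blocks per extent (illustrative granularity)

# logical extent index -> (physical array, physical extent index)
mapping = {
    0: ("vendor-a-array", 7),
    1: ("vendor-b-array", 2),
    2: ("vendor-a-array", 8),
}

def locate(logical_block: int):
    """Translate a logical block address to its physical location.
    The caller never sees which array actually holds the data."""
    array, phys_extent = mapping[logical_block // EXTENT]
    return array, phys_extent * EXTENT + logical_block % EXTENT

print(locate(0))     # ('vendor-a-array', 7168)
print(locate(1500))  # ('vendor-b-array', 2524)
```

Because consumers address only logical blocks, the virtualization layer can migrate an extent to a different array by updating the table, without the user noticing.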
See also
ATA over Ethernet (AoE)
Direct-attached storage (DAS)
Disk array
Fibre Channel
Fibre Channel over Ethernet
File Area Network
Host Bus Adapter (HBA)
iSCSI
iSCSI Extensions for RDMA
List of networked storage hardware platforms
List of storage area network management systems
Massive array of idle disks (MAID)
Network-attached storage (NAS)
Redundant array of independent disks (RAID)
SCSI RDMA Protocol (SRP)
Storage Management Initiative – Specification (SMI-S)
Storage hypervisor
Storage Resource Management (SRM)
Storage virtualization
System area network
References
^ Jon Tate, Pall Beck, Hector Hugo Ibarra, Shanmuganathan Kumaravel & Libor Miklas (2017). "Introduction to Storage Area Networks" (PDF). IBM Redbooks.
^ NIIT (2002). Special Edition: Using Storage Area Networks. Que Publishing. ISBN 9780789725745.
^ Christopher Poelker & Alex Nikitin, eds. (2009). Storage Area Networks For Dummies. John Wiley & Sons. ISBN 9780470471340.
^ Richard Barker & Paul Massiglia (2002). Storage Area Network Essentials: A Complete Guide to Understanding and Implementing SANs. John Wiley & Sons. p. 198. ISBN 9780471267119.
^ "TechEncyclopedia: IP Storage". Retrieved 2007-12-09.
^ "TechEncyclopedia: SANoIP". Retrieved 2007-12-09.
^ A. Bia, A. Rabasa & C. A. Brebbia, eds. (2013). Data Management and Security: Applications in Medicine, Sciences, and Engineering. WIT Press. p. 63. ISBN 9781845647087.
^ "StorNext Storage Manager – High-speed file sharing, Data Management, and Digital Archiving Software". Quantum.com. Retrieved 2013-07-08.
External links
Introduction to Storage Area Networks, an exhaustive introduction to SANs, IBM Redbook
SAN vs. DAS: A Cost Analysis of Storage in the Enterprise
SAS and SATA, solid-state storage lower data center power consumption
SAN NAS Videos
Storage Area Network Info
20 most promising enterprise storage solution providers of 2018