NAStronaut

SHARED STORAGE EXPLAINED
 

INTERNAL STORAGE

 

A computer with a single fast hard disk or SSD is a powerful editing and content creation platform. However, such a setup has inherent shortcomings:

  • Speed is capped
  • Capacity is finite and can be increased either at the expense of performance or with significant time spent rebuilding data
  • Even in a RAID 1 configuration, two or four disks do not fully protect data: errors can still cause considerable downtime or total data loss
  • Sharing data with others is too slow to support multi-stream HD editing (a quick illustration follows this list)
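
As a rough illustration of the last point, here is a minimal sketch in Python using assumed example figures for single-drive throughput and per-stream bitrate (neither is a measured value); it shows how quickly one drive runs out of headroom once several HD streams are in play.

# Illustrative only: the figures below are assumptions, not measurements.
MBIT = 1_000_000

single_disk_bps = 150 * 8 * MBIT   # assumed sustained throughput of one drive (~150 MB/s)
stream_bps = 220 * MBIT            # assumed bitrate of one HD editing stream (~220 Mbit/s)

max_streams = single_disk_bps // stream_bps
print(f"One drive sustains roughly {max_streams} such streams "
      f"before seek overhead and sharing make editing unworkable")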
     
 
 

EXTERNAL STORAGE

 

Attaching RAID arrays to individual workstations is unacceptably expensive and does not lend itself well to collaborative work and asset sharing. In fact, in most situations it leads to duplicate files and long, wasteful hours spent copying and moving data between workstations.

These factors underline the benefits of centralized storage for larger workgroups.
 

 
 

CENTRAL STORAGE

 

The growth of shared storage answers the demand for safe data storage that can be shared between users throughout a facility, allowing for faster workflows and more creative results. The other significant benefit is that media asset management (MAM) and data archiving can all be managed efficiently from a central location.

Data stored in fast centralized multi-disk arrays and protected by RAID 5 can be safely and quickly shared over a network. Arrays with 8 or more disks are easy to administer and maintain without forcing users to sit idle for hours. With fast and robust storage using standard Ethernet protocols, users can start sharing files and working collaboratively on creative projects.
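
As a quick illustration of the RAID 5 trade-off mentioned above, the sketch below (Python, with assumed disk counts and sizes rather than any product specification) shows how usable capacity relates to raw capacity: the array gives up one disk's worth of space for parity in exchange for surviving a single-disk failure.

# Illustrative RAID 5 arithmetic; disk count and size are assumed values.
disks = 8        # number of drives in the array
disk_tb = 4      # capacity of each drive in TB

usable_tb = (disks - 1) * disk_tb   # RAID 5 reserves one disk's worth of space for parity
print(f"{disks} x {disk_tb} TB in RAID 5 -> {usable_tb} TB usable, "
      "and the array survives the loss of any single disk")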

 

 

SAN (STORAGE AREA NETWORK)

 

The traditional route to sharing fast central storage between creative workstations, the SAN used to be the only choice when it came to the high-bandwidth demands of the media industry. The fragility and complexity of SANs were accepted as a necessary evil, and many systems integrators spent time and effort learning how to install and support these kinds of deployments. A badly installed SAN is hugely problematic; users can lose data and hours of time whilst remedial work is carried out.

The storage is shared at block level to match the functionality of local storage, but to achieve this across all workstations the access must be managed. This necessitates a metadata management layer, which is typically run across Ethernet and managed by one or more metadata controllers (MDCs).
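
The sketch below is a purely illustrative toy model of that split, assuming a hypothetical MetadataController and SanClient rather than any vendor's actual software: every workstation must first ask the MDC where a file's blocks live (over Ethernet) before it can read those blocks directly over the storage fabric.

# Toy model only; not any real SAN client or MDC API.

class MetadataController:
    """Maps file paths to block extents and records which clients hold them open."""
    def __init__(self):
        self.extents = {"/projects/promo.mov": [(0, 4096), (8192, 4096)]}
        self.readers = {}

    def open_for_read(self, client, path):
        # Real MDCs also handle locking, quotas and block allocation;
        # here we simply note the reader and hand back the block map.
        self.readers.setdefault(path, set()).add(client)
        return self.extents[path]

class SanClient:
    def __init__(self, name, mdc):
        self.name, self.mdc = name, mdc

    def read_file(self, path):
        extents = self.mdc.open_for_read(self.name, path)   # metadata hop (Ethernet)
        for offset, length in extents:                       # data path (block storage)
            print(f"{self.name}: reading {length} bytes at offset {offset} of {path}")

SanClient("edit-suite-1", MetadataController()).read_file("/projects/promo.mov")

That detour through the MDC is part of what makes a SAN feel fragile in practice: if the metadata layer or its Ethernet network misbehaves, clients lose coordinated access to the storage.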

Licensed client software must be installed on each workstation and every MDC, which adds cost and inconvenience: software versions must be matched and in many cases are OS dependent, meaning that policing of OS revisions is essential or workstations will simply not see the storage. SANs are not usually flexible with different client computers, so mixing PC, Mac and Linux workstations on a network is problematic and prevents true collaboration.

Unless the in-house IT support team are experienced with SAN and are familiar with its various layers of intricacy and fragility, a support contract with a technical specialist is vital.

It should also be noted that future capacity expansions or regular maintenance sessions can require significant downtime, disrupting ongoing work. A backup system is vital, although it is not always feasible to fully duplicate an entire SAN. Users often back up their files to an LTO library or an office NAS solution, meaning that data recovery can be laborious.
 

 
 

VOLUME LEVEL & FILE LEVEL SAN

 

First generation SANs were generally ‘Volume level’ which meant that they offered multiple volumes to client computers, usually one per workstation. This would generally mean that users had read and write access to one volume and read-only access to every other volume on the SAN. This crude management architecture was not popular with users and in many cases entailed lengthy copying times and inconvenient workflows. File Level SANs came later and were able to offer a single volume with read and write access to all clients. This delivered a more efficient workflow.
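
To make the contrast concrete, here is a minimal sketch in Python (workstation and volume names are invented for the example) of the access rights each model grants: a volume-level SAN gives each user read-write access only to their own volume, while a file-level SAN gives every client read-write access to a single shared volume.

# Illustrative access matrices; names are assumptions, not real configurations.
workstations = ["edit-1", "edit-2", "edit-3"]

# Volume-level SAN: one volume per workstation, read-write on your own volume,
# read-only on everyone else's, so material has to be copied between volumes.
volume_level = {
    ws: {f"vol-{other}": ("rw" if other == ws else "ro") for other in workstations}
    for ws in workstations
}

# File-level SAN: one shared volume, read-write for every client,
# so projects can be shared in place instead of copied.
file_level = {ws: {"shared-volume": "rw"} for ws in workstations}

print(volume_level["edit-1"])   # {'vol-edit-1': 'rw', 'vol-edit-2': 'ro', 'vol-edit-3': 'ro'}
print(file_level["edit-1"])     # {'shared-volume': 'rw'}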

 

 
 

FIBRE CHANNEL

 

Fibre Channel is a common connectivity option for SANs. It sends the Fibre Channel protocol between targets (storage) and initiators (clients/MDCs) via optical cables, in conjunction with SFP transceivers, Fibre Channel PCI cards and Fibre Channel switches. A separate metadata management layer is also necessary and is run over Ethernet.

By the time the client software, cables, transceivers, Fibre Channel and Ethernet networks are accounted for, it is clear that Fibre Channel SANs are expensive and do not scale very well; more clients mean more expensive kit and the introduction of bottlenecks through uplinked Fibre Channel switches. As data is added, greater network traffic leads to heavily contended metadata networks and instability.
 

 
 

iSCSI

 

iSCSI is another type of SAN, which sends the SCSI protocol between targets and initiators over Ethernet connections. The downside here is that the metadata layer and the storage connection share one cable, reducing the available performance of each Ethernet connection.
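
A rough sketch of what that sharing costs, using an assumed figure for the slice of the link taken up by metadata and protocol chatter (the percentage is an illustration, not a measurement):

# Illustrative only: metadata_share is an assumed figure.
GBIT = 1_000_000_000

link_bps = 10 * GBIT        # one 10 Gigabit Ethernet connection
metadata_share = 0.15       # assumed fraction of the link consumed by metadata traffic

payload_bps = link_bps * (1 - metadata_share)
print(f"Block traffic gets roughly {payload_bps / GBIT:.1f} of {link_bps / GBIT:.0f} Gbit/s "
      "when metadata shares the same cable")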

iSCSI SANs also suffer from poor scalability and struggle to handle many connections to a single volume. Therefore, manufacturers that promote iSCSI SANs tend to recommend using multiple volumes, which starts to create similar problems to a Volume Level SAN, and a poor user experience.

Many believe that iSCSI is the most effective way of using Ethernet for high-bandwidth use. GB Labs’ Space systems prove that NAS can easily outperform iSCSI.
 

 
 

NAS (NETWORK ATTACHED STORAGE)

 

Across GB Labs’ entire product range, we employ network attached storage (NAS). Based on standard global IT protocols (NFS or SMB/CIFS), NAS connects to users over Ethernet. Running at Gigabit, 10 Gigabit and 40 Gigabit speeds, very high data throughputs can be achieved; these can be further increased with channel bonding.
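
As a rough guide to what those link speeds mean in practice, the sketch below (Python, using an assumed per-stream bitrate and ignoring protocol overhead) estimates how many HD streams each Ethernet speed could carry, and how channel bonding multiplies that headroom.

# Illustrative arithmetic with assumed figures; protocol overhead is ignored.
stream_mbit = 220                  # assumed bitrate of one HD editing stream (Mbit/s)
link_speeds_gbit = [1, 10, 40]     # Gigabit, 10 Gigabit and 40 Gigabit Ethernet
bonded_links = 2                   # assumed number of bonded connections

for gbit in link_speeds_gbit:
    single = (gbit * 1000) // stream_mbit
    bonded = (gbit * 1000 * bonded_links) // stream_mbit
    print(f"{gbit} GbE: ~{single} streams, ~{bonded} with {bonded_links} bonded links")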

Because NAS runs on Cat5e and Cat6 networks, cabling, routers and switches are inexpensive and readily available. Moreover, many organizations can use existing networking equipment rather than having to purchase all-new kit for a new storage installation. 

Other important advantages include:

  • No client software or licenses
  • Cross platform sharing
  • Compatibility with most in-house systems
  • Powerful user and group management
  • Easy management
  • Automated replication
  • Scalable user counts
  • Add further storage units instantly 
  • Local or remote data access
  • Local or remote technical support


Is a tier 1 NAS fast enough to act as a shared editing environment? 

Historically, most NAS solutions were built for office and general IT users and were not designed to offer the sheer speed needed for video, graphics and content creation. With GB Labs' Space systems, NAS has been redefined: built for extreme performance, with every component tested, re-engineered and optimized, and the OS perfected for outright media performance. Now it outcompetes virtually everything on the market for speed and cost of ownership.

GB Labs produce the highest-performance NAS hardware on the market, saturating 1GbE and 10GbE connections with sustained data rates suitable for creative editing systems and very demanding media workflows.

The question of architecture then becomes less relevant than the question of the storage behind it.

Unlike other NAS products, Space offers:

  • Dynamic capacity expansion
  • Scalable performance
  • Tiered storage with intelligent automation
  • Intuitive user interfaces
  • Integration with MAM systems and video project sharing tools