petaLibrary Architecture



Overall Architectural Diagram

[Diagram: overall petaLibrary architecture]

GridNAS

The GridNAS servers are a customer's point of access to the petaLibrary. These systems export the petaLibrary file system via SMB/CIFS and NFS. The file system's volumes are mounted on the GridNAS servers using the GPFS protocol over the InfiniBand network fabric.

Tier 1 Storage

Tier 1 storage consists of a GPFS file system hosted on a DDN GridScaler 7K with five SS8460 expansion chassis, together capable of a maximum throughput of 12 GB/s. The GridScaler 7K is connected to the campus core at 40 Gbps and to the GridNAS and various ARCC servers over FDR InfiniBand.
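For a sense of scale, the 12 GB/s figure bounds how quickly bulk data can move through Tier 1. The short Python sketch below is a back-of-the-envelope estimate only; the 100 TB dataset size is an assumed example, and real workloads rarely sustain the full aggregate rate.

    # Back-of-the-envelope estimate only; assumes the full 12 GB/s aggregate
    # throughput is achievable, which real workloads rarely sustain.
    TIER1_GB_PER_S = 12                      # GridScaler 7K aggregate throughput

    def transfer_hours(dataset_tb):
        """Best-case hours to move dataset_tb terabytes through Tier 1."""
        return (dataset_tb * 1000 / TIER1_GB_PER_S) / 3600   # 1 TB = 1000 GB

    print(f"{transfer_hours(100):.1f} hours for 100 TB")     # ~2.3 hours at line rate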


The SS8460 expansion chassis are JBOD (just a bunch of disks) enclosures connected to the GridScaler 7K over multiple 6 Gb/s SAS links. The disks are organized into RAID-6 sets of 10, which are then striped together to form the petaLibrary file system.
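Because each RAID-6 set dedicates two disks' worth of capacity to parity, only 8 of every 10 disks contribute usable space. The sketch below shows the arithmetic; the per-disk size and number of sets are illustrative assumptions, not the actual petaLibrary build-out.

    # Illustrative RAID-6 capacity arithmetic; disk size and number of sets
    # are assumptions, not the actual petaLibrary configuration.
    DISKS_PER_SET = 10      # each RAID-6 set spans 10 disks
    PARITY_DISKS = 2        # RAID-6 tolerates two disk failures per set
    DISK_TB = 4             # assumed per-disk capacity in TB
    NUM_SETS = 50           # assumed number of RAID-6 sets striped together

    raw_tb = NUM_SETS * DISKS_PER_SET * DISK_TB
    usable_tb = NUM_SETS * (DISKS_PER_SET - PARITY_DISKS) * DISK_TB

    print(f"raw: {raw_tb} TB, usable: {usable_tb} TB "
          f"({usable_tb / raw_tb:.0%} efficiency)")   # 80% of raw capacity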

Tier 2 Storage

The DDN WOS 7K is used in the petaLibrary as an archive tier. It is a high-performance object storage appliance connected to the network at 10 Gbps. Each device provides 360 TB of usable capacity. Access to this tier is handled through the GridNAS servers and the data archive software and is transparent to end users.


Data Archive Software

QStar Archive Manager and Data Migrator are used to manage data on the petaLibrary, transferring it between the Tier 1 and Tier 2 storage systems. The Archive Manager also provides 'stubbing' for data that has been archived, so end users see their entire file system as a single volume even though it spans multiple tiers. The Data Migrator node is responsible for transferring data to and from the archive as it is accessed. These systems are connected at 10 Gbps and can manage several petabytes. To scale for capacity, performance, or redundancy, multiple migrator/archive nodes can be installed. The software can also integrate additional storage tiers, such as a tape library, into the current architecture.
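The Python sketch below illustrates the general idea behind stubbing in a hierarchical storage setup: a small stub stays on Tier 1 in place of an archived file, and reading it triggers a recall from Tier 2. This is a conceptual illustration only, not QStar's actual mechanism, and the paths and function names are hypothetical.

    # Conceptual illustration of HSM-style stubbing; NOT QStar's implementation.
    import shutil
    from pathlib import Path

    TIER2 = Path("/archive")             # hypothetical Tier 2 mount point

    def archive(path: Path):
        """Copy a file to the archive tier and leave a tiny stub behind."""
        dest = TIER2 / path.name
        shutil.copy2(path, dest)
        path.write_text(f"STUB:{dest}")  # stub records where the data went

    def read(path: Path) -> bytes:
        """Read a file, transparently recalling it from Tier 2 if stubbed."""
        data = path.read_bytes()
        if data.startswith(b"STUB:"):
            recalled = Path(data[5:].decode())
            shutil.copy2(recalled, path) # bring the real data back to Tier 1
            data = path.read_bytes()
        return data

To the end user, archived and resident files look the same; the recall happens behind the scenes on first access.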

Data Transfer Nodes

The Data Transfer Nodes for the petaLibrary project are Supermicro servers connected to the Science DMZ at 10 Gbps and to the petaLibrary via FDR InfiniBand. Their purpose is to allow high-speed, "effortless" transfers of data between endpoints.
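A rough line-rate comparison of the two links facing a Data Transfer Node helps put the speeds in perspective. The figures below are best-case nameplate numbers; real transfers see less due to protocol overhead and storage limits.

    # Best-case line-rate comparison of the DTN-facing links.
    def hours_per_tb(gbit_per_s):
        """Best-case hours to move 1 TB over a link of the given speed."""
        return (1000 / (gbit_per_s / 8)) / 3600   # 1 TB = 1000 GB, 8 bits per byte

    print(f"10 Gbps Ethernet : {hours_per_tb(10):.2f} h/TB")   # ~0.22 h/TB
    print(f"56 Gbps FDR IB   : {hours_per_tb(56):.2f} h/TB")   # ~0.04 h/TB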



Networking

Two Mellanox SX1024 switches will provide 10/40 Gbps Ethernet for the project. Each switch contains 48 SFP+ ports running at 10 Gbps and 12 QSFP ports at 40 Gbps. Ethernet will provide most of the network connectivity for the project.
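Summing the port capacities gives the per-switch line-rate ceiling; this is a nameplate figure, not a measured throughput.

    # Per-switch port arithmetic for the Mellanox SX1024 (line-rate sum only).
    SFP_PORTS, SFP_GBPS = 48, 10
    QSFP_PORTS, QSFP_GBPS = 12, 40

    aggregate_gbps = SFP_PORTS * SFP_GBPS + QSFP_PORTS * QSFP_GBPS
    print(f"{aggregate_gbps} Gbps aggregate per switch")   # 960 Gbps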





Two Mellanox SX6012 switches will provide 56 Gbps FDR InfiniBand for the project. Each switch contains 12 ports. InfiniBand will provide connectivity to ARCC services such as Globus endpoints and Mount Moran.