A HP SFS G3 Performance
A.1 Benchmark Platform
HP SFS G3, based on Lustre File System Software, is designed to provide the performance and
scalability needed for very large high-performance computing clusters. Performance data in the
first part of this appendix (Sections A.1 through A.6) is based on HP SFS G3.0-0. Performance of
HP SFS G3.1-0 and HP SFS G3.2-0 is expected to be comparable to HP SFS G3.0-0.
The end-to-end I/O performance of a large cluster depends on many factors, including disk
drives, storage controllers, storage interconnects, Linux, Lustre server and client software, the
cluster interconnect network, server and client hardware, and finally the characteristics of the
I/O load generated by applications. A large number of parameters at various points in the I/O
path interact to determine overall throughput. Use care and caution when attempting to
extrapolate from these measurements to other cluster configurations and other workloads.
Figure A-1 shows the test platform used. Starting on the left, the head node launched the test
jobs on the client nodes, for example, IOR processes run under the control of mpirun. The head
node also consolidated the results from the clients.
Figure A-1 Benchmark Platform
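A typical IOR launch of the kind described above has the following general form. The process
count, host file, block size, transfer size, and mount point shown here are illustrative
placeholders, not the exact parameters used in these tests:

    # Illustrative IOR run: POSIX API, file per process, write then read,
    # 4 GB per process in 1 MB transfers (parameters are assumptions)
    mpirun -np 128 --hostfile ./client_hosts ./IOR -a POSIX -w -r -F \
        -b 4g -t 1m -o /mnt/testfs/ior_testfile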
The clients were 16 HP BL460c blades in a c7000 enclosure. Each blade had two quad-core
processors, 16 GB of memory, and a DDR IB HCA. The blades were running HP XC V4.0 BL4
software that included a Lustre 1.6.5 patchless client.
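For reference, a patchless Lustre 1.6 client mounts the file system over the InfiniBand (o2ib)
network with a command of the following form; the MGS node name, file system name, and mount
point below are placeholders rather than the names used on this platform:

    # Illustrative client mount (MGS node and fsname are placeholders)
    mount -t lustre mgs1@o2ib:/testfs /mnt/testfs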
The blade enclosure included a 4X DDR IB switch module with eight uplinks. These uplinks and
the six Lustre servers were connected to a large InfiniBand switch (Voltaire 2012). The Lustre
servers used ConnectX HCAs. This fabric minimized any InfiniBand bottlenecks in our tests.
The Lustre servers were DL380 G5s with two quad-core processors and 16 GB of memory, running
RHEL v5.1. These servers were configured in failover pairs using Heartbeat v2. Each server could
see its own storage and that of its failover mate, but mounted only its own storage until failover.
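As an illustration of that arrangement, each server normally mounts only its own targets, and on
failover Heartbeat mounts the mate's targets on the surviving server. The device paths and mount
points below are placeholders:

    # Normal operation on one OSS (device paths are placeholders)
    mount -t lustre /dev/mapper/ost0 /mnt/ost0
    # After its mate fails, Heartbeat also mounts the mate's target here
    mount -t lustre /dev/mapper/ost1 /mnt/ost1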