40GE FAST Data Transfer Kit - Setup
Once the required hardware has been identified, we can start building the system. Below is the list of hardware selected for both the server and the network.
- SuperMicro 2U, 16 x 2.5" disk slot Chassis
- Motherboard X9DRi/F
- 64GB (8GB x 8) DDR3 1600MHz RAM
- Dual Intel SandyBridge E5-2670 processors
- LSI MegaRaid Controllers, LSI-9265-8i with fast path license enabled
- OCZ 120GB Vertex-3 SSD drives
- Mellanox 354QCA (?) dual-port ConnectX-3 VPI NIC
- Dell-Force10 Z9000, FTOS version 8.11.3
We found that running the link at the full 40GE rate over QSFP passive copper cables produced link errors, as reported by the NIC counters. These errors have a noticeably negative effect on TCP streams.
- Direct Attached QSFP Active Fiber Optic Cable, Dell-Force10
- Direct Attached QSFP Passive Copper Cable, Dell-Force10
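The back-of-the-envelope Mathis et al. formula illustrates why even a small loss rate from link errors matters at 40GE: a single TCP stream's throughput is bounded by roughly (MSS/RTT) x 1.22/sqrt(p) for loss rate p. A sketch with hypothetical LAN numbers (1500-byte MTU, 0.2 ms RTT; these are illustrative, not measured values from our setup):

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Mathis et al. upper bound on one TCP stream's throughput, in bits/s:
    rate <= (MSS / RTT) * C / sqrt(p), with C ~= 1.22."""
    return (mss_bytes * 8 / rtt_s) * 1.22 / math.sqrt(loss_rate)

# Hypothetical numbers: MSS ~1460 bytes (1500-byte MTU), 0.2 ms LAN RTT.
for p in (1e-7, 1e-5, 1e-3):
    gbps = mathis_throughput_bps(1460, 0.0002, p) / 1e9
    print(f"loss rate {p:g}: single stream capped near {gbps:.1f} Gb/s")
```

With these assumptions, a loss rate of only 1e-5 already caps a single stream well below 40 Gb/s, which matches the degradation we observed with error-prone copper cables.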
As shown in the Intel SandyBridge architecture, no PCIe bridge is required between the PCIe lanes and the processor; the lanes attach directly to each CPU socket. (In certain cases where the effective per-lane transfer rate is lower, a chipset may still be used to aggregate multiple devices, e.g. on-board SATA and USB ports.) It is therefore important to know which processor a device such as the Mellanox NIC is attached to, so that the proper SMP affinity can be applied (see the optimization section). SuperMicro provides a motherboard layout diagram, while the Dell R720xd labels each PCIe slot with its processor assignment.
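The SMP affinity value written to /proc/irq/&lt;n&gt;/smp_affinity is simply a hex bitmask over CPU numbers. A minimal sketch of building that mask (the socket-0 CPU numbering below is a hypothetical layout; check /proc/cpuinfo or numactl --hardware on your own system, and find the NIC's IRQ numbers in /proc/interrupts):

```python
def cpu_mask(cpus):
    """Return the hex bitmask string, as written to
    /proc/irq/<n>/smp_affinity, selecting the given CPU numbers."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, "x")

# Hypothetical layout: socket 0 holds CPUs 0-7.  Pinning the NIC's
# interrupts there keeps them on the processor whose PCIe lanes host
# the NIC:
#   echo ff > /proc/irq/<irq>/smp_affinity   (for each NIC IRQ)
print(cpu_mask(range(8)))   # mask for CPUs 0-7
```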
The servers are installed with Scientific Linux 6.2 (a derivative of RHEL 6.2) with the latest kernel updates applied. We used iperf for network testing and FDT for data transfers between the two servers. Other software components need to be updated as below:
- Latest firmware from mellanox.com for the 40GE VPI NIC (2.10.800 as of this writing)
- Latest driver for the Mellanox NIC (1.5.8)
- Mellanox firmware tools
- Latest LSI driver from lsi.com (Release 6.12)
- Latest LSI firmware from lsi.com
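After updating, the driver and firmware versions the NIC actually reports can be confirmed with `ethtool -i <interface>`. A minimal sketch that parses that output (the sample text and bus address below are illustrative, not captured from our servers; mlx4_en is the ConnectX-3 Ethernet driver):

```python
def parse_ethtool_info(text):
    """Parse the key: value lines of `ethtool -i <iface>` into a dict."""
    info = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
    return info

# Illustrative sample of what `ethtool -i eth2` might print after the
# updates listed above; run the real command to check your own NIC.
sample = """driver: mlx4_en
version: 1.5.8
firmware-version: 2.10.800
bus-info: 0000:83:00.0"""

info = parse_ethtool_info(sample)
print(info["version"], info["firmware-version"])
```

A simple check like this makes it easy to script verification across several hosts before starting transfer tests with iperf or FDT.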
Server Front Panel