With three 100 gigabit-per-second (100 Gbps) wide area network circuits set up by the SCinet, Internet2, CENIC, CANARIE, BCnet, Starlight, and US LHCNet network teams, and servers with 40 gigabit Ethernet (40GE) interfaces at each site, the team reached a record transfer rate of 339 Gbps among Caltech, the University of Victoria Computing Center in British Columbia, the University of Michigan, and the Salt Palace Convention Center in Utah. This nearly doubled last year's overall record and eclipsed the previous single-link bidirectional record with a 187 Gbps data flow between Victoria and Salt Lake.
Several other records were achieved on November 14-15: a record overall storage-to-storage rate of 187 Gbps across the three links; a unidirectional transfer between storage systems in Victoria and Salt Lake of 96 Gbps on one link; an 80 Gbps transfer from Caltech to a single server with two 40GE interfaces at Salt Lake, with nearly 100% utilization of the servers' interfaces at both ends; and a transfer using Remote Direct Memory Access (RDMA) over Ethernet between Pasadena and Salt Lake that sustained 75 Gbps with a CPU load on the servers of only 5%. In total, 3.8 petabytes were transferred over the three days of the conference exhibit, including 2 petabytes on the last day.
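As a back-of-the-envelope check on these figures, the 2 petabytes moved on the final day can be converted to an average rate; this short sketch (illustrative only, not part of the demonstration software) shows the arithmetic, assuming decimal petabytes:

```python
# Hypothetical sanity check: does 2 PB moved in one day match the
# 187 Gbps storage-to-storage record reported above?

PETABYTE = 1e15            # bytes (decimal petabyte assumed)
SECONDS_PER_DAY = 86_400

def avg_rate_gbps(bytes_moved: float, seconds: float) -> float:
    """Average throughput in gigabits per second."""
    return bytes_moved * 8 / seconds / 1e9

rate = avg_rate_gbps(2 * PETABYTE, SECONDS_PER_DAY)
print(f"{rate:.0f} Gbps")  # ~185 Gbps
```

The result, roughly 185 Gbps sustained over a full day, sits just below the 187 Gbps storage-to-storage record, suggesting the links ran near that rate almost continuously.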
The latest generation of servers, based on the PCI Express 3.0 standard and equipped with line-rate 40GE interface cards as well as RAID arrays of solid-state disks (SSDs), allowed the team to reach a stable disk-to-disk throughput of 38 Gbps over long distances between a pair of two-rack-unit (2U) servers, tripling the per-server throughput achieved with earlier versions of these systems in 2011. A single server equipped with solid-state storage cards and two network interfaces reached very close to 80 Gbps.
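To put the per-server rates in practical terms, this sketch estimates how long a bulk disk-to-disk transfer would take at the quoted rates; the 100 TB dataset size is an arbitrary illustrative assumption, not a figure from the demonstration:

```python
# Illustrative estimate: hours to move a dataset at the per-server
# rates quoted above (38 Gbps per 2U server pair, ~80 Gbps with two
# 40GE interfaces). Dataset size is a hypothetical example.

def transfer_hours(dataset_tb: float, rate_gbps: float) -> float:
    """Hours needed to move dataset_tb terabytes at rate_gbps."""
    bits = dataset_tb * 1e12 * 8
    return bits / (rate_gbps * 1e9) / 3600

print(f"{transfer_hours(100, 38):.1f} h")   # 100 TB at 38 Gbps, ~5.8 h
print(f"{transfer_hours(100, 80):.1f} h")   # 100 TB at 80 Gbps, ~2.8 h
```

At these rates, a single pair of 2U servers could move datasets that once required a full cluster in a matter of hours rather than days.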