Detailed report for UCSD
Haifeng Pi (firstname.lastname@example.org)
During the initial tests at SC10, a throughput of ~6 Gb/s from UCSD to SC10 (disk-to-disk transfer) was achieved using 12 servers at UCSD and 4 servers at SC10. Each SC10 server is able to sink 2 Gb/s. A memory-to-memory transfer confirmed that the maximum UCSD-to-SC10 throughput is 7.5 Gb/s.
It is unclear whether we can reach 7.5 Gb/s disk-to-disk from UCSD to SC10. We probably need to add at least one more server at SC10. Currently we are running at a ratio of 3 servers (UCSD) to 1 server (SC10), so proportionally we would also add 3 more servers at UCSD.
The reverse direction (SC10 to UCSD) only reached 3-3.5 Gb/s today (disk-to-disk). We will run a memory-to-memory test to confirm whether it is limited by the network.
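The report does not name the tool used for the memory-to-memory tests (iperf is a common choice for this kind of measurement). A minimal sketch of the idea, assuming nothing beyond a plain TCP socket: one side discards received bytes in memory, the other side times how long a fixed volume takes to send, so disks are out of the path entirely. Run here over loopback for illustration:

```python
import socket
import threading
import time

def sink(server_sock):
    """Accept one connection and discard all received bytes (memory-only sink)."""
    conn, _ = server_sock.accept()
    buf = bytearray(1 << 20)
    while conn.recv_into(buf):
        pass
    conn.close()

def measure_throughput(total_bytes=256 * 1024 * 1024):
    """Send total_bytes over a loopback TCP socket and return throughput in Gb/s."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))   # loopback stand-in for the remote sink host
    srv.listen(1)
    t = threading.Thread(target=sink, args=(srv,))
    t.start()

    cli = socket.create_connection(srv.getsockname())
    chunk = b"\0" * (1 << 20)    # 1 MiB payload per send
    start = time.perf_counter()
    sent = 0
    while sent < total_bytes:
        cli.sendall(chunk)
        sent += len(chunk)
    cli.close()
    t.join()
    srv.close()
    elapsed = time.perf_counter() - start
    return sent * 8 / elapsed / 1e9   # bits transferred / seconds -> Gb/s

if __name__ == "__main__":
    print(f"loopback memory-to-memory: {measure_throughput():.2f} Gb/s")
```

Pointing the client at a remote sink instead of loopback gives the network-only number; comparing it against the disk-to-disk rate is what separates a network bottleneck from a storage one.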
In the last two hours of SC10 we
finally accomplished our mission.
1. We added 12 more hosts at UCSD for data transfer from UCSD to SC10, which provides 7-8 Gb/s throughput to 3 hosts at SC10 via ESnet.
2. The original 12 hosts at UCSD are used for traffic from SC10 to UCSD, which provides 4-5 Gb/s throughput with 4 hosts at SC10. This traffic actually uses CENIC.
Overall, we have 24 hosts at UCSD and 7 hosts at SC10, achieving 12-14 Gb/s aggregate throughput.
It is still under investigation why UCSD->SC10 is consistently 50+% more efficient than the reverse direction.