OpenFlow and Multipath TCP for Data Intensive Science
While full use of 100 Gbps links was being demonstrated on dedicated infrastructure, the Caltech team at SC12 also used software-defined networking and multipath protocols to overcome the limitations faced by most data-intensive science projects, which rely on shared network infrastructures or dedicated links of 10 Gbps or less.
Researchers from iCAIR, SARA, SURFnet, and Caltech demonstrated the use of MultiPath TCP (MPTCP) and OpenFlow to address these needs. A dynamic multipath switching fabric, based on the Floodlight OpenFlow controller and Pronto OpenFlow switches in Salt Lake City, Chicago, Amsterdam, and Geneva, was used to create a set of link-disjoint paths across an intercontinental network. Whenever a new flow appeared at the ingress of the OpenFlow network, the corresponding forwarding entries were automatically pushed to the OpenFlow switches. In addition, MPTCP split each TCP data stream into parallel TCP subflows from source to destination. By using multiple paths simultaneously, the team reached an aggregate data transfer rate exceeding the maximum capacity of a single link: 14 Gbps on two 10 Gbps paths between Geneva and Amsterdam. The team also filled four disjoint paths from Geneva to the show floor in Salt Lake City, with capacities of 5 to 10 Gbps each.
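In the demonstration the controller installed entries reactively as flows arrived; as an illustration of what such a forwarding entry looks like, the sketch below builds a request for Floodlight's Static Flow Pusher REST interface. The DPID, ports, addresses, and controller hostname are hypothetical, and the field names follow the Floodlight v1.x API, not necessarily the controller version used at SC12.

```python
import json

# Hypothetical forwarding entry: steer one MPTCP subflow out a chosen port.
# DPID, ports, and IP addresses are illustrative, not from the SC12 setup.
FLOW_ENTRY = {
    "switch": "00:00:00:00:00:00:00:01",  # DPID of the ingress switch (assumed)
    "name": "mptcp-subflow-1",
    "priority": "32768",
    "in_port": "1",
    "eth_type": "0x0800",                 # match IPv4 traffic
    "ipv4_src": "10.0.0.1",
    "ipv4_dst": "10.0.0.2",
    "active": "true",
    "actions": "output=2",                # forward toward one disjoint path
}

def flow_push_request(controller_host: str) -> tuple[str, str]:
    """Build the URL and JSON body for Floodlight's Static Flow Pusher."""
    url = f"http://{controller_host}:8080/wm/staticflowpusher/json"
    return url, json.dumps(FLOW_ENTRY)

url, body = flow_push_request("controller.example.org")
```

Sending this body with an HTTP POST (e.g. via `urllib.request` or `curl`) would install the entry on the named switch; a second entry with a different `actions` port would pin another subflow to a different disjoint path.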
The next round of tests will target state-of-the-art throughput across shared R&E networks, with dynamic reconfiguration of the network using OpenFlow, triggered either by degradation of a network segment or by shifting loads from competing flows.
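The planned trigger logic can be sketched as a simple policy: keep traffic on the active path unless its measured rate falls below a threshold, then repoint the OpenFlow output action at the best alternative. The threshold, port numbers, and rates below are hypothetical, and a real controller would also account for hysteresis and competing flows.

```python
# Hypothetical reconfiguration policy: switch paths only when the active
# one degrades below a threshold (value chosen for illustration).
DEGRADED_THRESHOLD_GBPS = 2.0

def choose_output_port(path_rates_gbps: dict[int, float], active_port: int) -> int:
    """Keep the active path unless it has degraded; otherwise pick the
    output port whose path currently carries the highest rate."""
    if path_rates_gbps.get(active_port, 0.0) >= DEGRADED_THRESHOLD_GBPS:
        return active_port
    return max(path_rates_gbps, key=path_rates_gbps.get)

# Example: the path via port 2 drops to 0.5 Gbps while port 3 still
# carries 8 Gbps, so the policy reroutes to port 3.
print(choose_output_port({2: 0.5, 3: 8.0, 4: 6.0}, active_port=2))  # → 3
```

In practice the chosen port would be written back to the switches as an updated forwarding entry, completing the degradation-triggered reconfiguration loop described above.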