The growing trend in cloud computing is driving the need to build massive data centers capable of holding hundreds to thousands of servers that sustain and support a variety of services (e.g., web search, video content hosting and distribution, social networking, and large-scale computations). The interconnection network infrastructure of the data center is key to its cost, performance, and feasibility. Several data center interconnection architectures exist that are based on a tree structure built of top-of-rack (ToR) switches interconnected through end-of-rack (EoR) switches, which in turn are connected through core switches. Independent of the interconnection topology of the data center network, high-performance switches and routers are the key building blocks of its infrastructure, and therefore the capability of the data center in all its aspects largely depends on the capability of its switches and routers. While high-performance switch design has been studied in the context of data networking and telecom, little research has been conducted into designing switching architectures for data center networks.
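As a rough illustration of how the tiered topology determines scale, consider the standard k-ary fat tree built from identical k-port switches, which supports k³/4 hosts. The sketch below assumes this classic construction; the function name and structure are illustrative, not part of the proposal:

```python
def fat_tree_hosts(k: int) -> int:
    """Hosts supported by a k-ary fat tree of identical k-port switches."""
    assert k % 2 == 0, "port count must be even"
    pods = k                  # the topology has k pods
    edge_per_pod = k // 2     # k/2 edge (ToR) switches per pod
    hosts_per_edge = k // 2   # each edge switch serves k/2 hosts
    return pods * edge_per_pod * hosts_per_edge  # equals k**3 // 4

# With commodity 48-port switches, a fat tree reaches 27,648 hosts.
print(fat_tree_hosts(48))  # → 27648
```

This is why switch radix is central to the first research theme: host capacity grows cubically in the port count of the building-block switch.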
The objective of this research is to explore this relatively untapped area: designing switching architectures for data center infrastructure capable of handling the projected growth of data centers. This research will focus on the following themes:
1) Scalable switching architectures for data centers: Design scalable, practical, high-radix switch topologies with low latency, performance guarantees, and low energy consumption.
2) Switch resource allocation: Propose scheduling and resource allocation algorithms with controllable latencies and guaranteed bandwidth.
3) Flow control: Develop appropriate flow control algorithms to regulate the flow of packets through the switch stages.
4) Performance evaluation: Evaluate the proposed topologies, algorithms, and methodologies for various data center applications.
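To make the scheduling theme concrete: input-queued switches typically compute, each time slot, a matching between inputs and outputs. Below is a minimal sketch of one grant/accept iteration of a round-robin matcher in the spirit of iSLIP; the function names and data layout are our own illustration, not a design prescribed by the proposal:

```python
def round_robin_match(requests, grant_ptr, accept_ptr, n):
    """One grant/accept iteration of a round-robin (iSLIP-style) matcher.

    requests[i]   : set of outputs that input i has cells queued for.
    grant_ptr[o]  : rotating priority pointer of output arbiter o.
    accept_ptr[i] : rotating priority pointer of input arbiter i.
    Returns a dict mapping each matched input to its output.
    """
    # Grant phase: each output grants the requesting input
    # closest to (and including) its pointer, in round-robin order.
    grants = {}  # input -> list of outputs that granted it
    for o in range(n):
        requesters = [i for i in range(n) if o in requests[i]]
        if requesters:
            winner = min(requesters, key=lambda i: (i - grant_ptr[o]) % n)
            grants.setdefault(winner, []).append(o)
    # Accept phase: each input accepts the grant closest to its pointer.
    match = {}
    for i, outs in grants.items():
        o = min(outs, key=lambda out: (out - accept_ptr[i]) % n)
        match[i] = o
        # Pointers advance one past the matched port only on acceptance,
        # which desynchronizes the arbiters over successive slots.
        grant_ptr[o] = (i + 1) % n
        accept_ptr[i] = (o + 1) % n
    return match

# 3x3 example: inputs 0 and 1 both want output 0; input 0 wins the
# first slot, leaving input 1 to be served in a later iteration or slot.
m = round_robin_match([{0, 1}, {0}, {2}], [0, 0, 0], [0, 0, 0], 3)
print(m)  # → {0: 0, 2: 2}
```

Algorithms of this family converge to a maximal matching over a few iterations; the research themes above target variants with controllable latency and bandwidth guarantees rather than this basic form.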