The following paragraphs briefly document the evolution of HKUST's network environment, which ITSC continually re-engineers to cope with changing user requirements:
1988 - 1993: From Ethernet to Shared FDDI Backbone
In 1988, while the Clear Water Bay campus was under construction, the University had a temporary planning office in downtown Tsim Sha Tsui. A simple 10 Mbps Ethernet backbone was built to satisfy its relatively few and simple office automation needs. Ethernet technology was chosen for its simplicity and flexibility: Ethernet cabling supports moderately long distances and has high immunity to interference and noise. In early 1991, a small FDDI (Fibre Distributed Data Interface) dual-ring-of-trees backbone - running ten times faster, at a speed of 100 Mbps - was put in place to test and plan for the actual network implementation at the permanent campus. The pilot network was thoroughly tested for over half a year before large-scale deployment in the Phase I campus area (which was designed principally for administration purposes). The FDDI backbone network was put into full-scale production in August 1991, one month before the first academic semester began.
1993 - 1994: From Shared to Switched FDDI Backbone
The Phase II extension became available for use in March 1992. Unlike Phase I, the Phase II laboratories and offices were designed for teaching and research. This posed a major challenge in network capacity planning - the bandwidth requirement would likely be higher, especially in laboratory areas. In addition, a number of supercomputer-class machines were being set up, and the number of network nodes was continually expanding. Thus, the demand on the FDDI backbone bandwidth was growing.
To better cope with the situation, and to accommodate demanding network applications, the single shared FDDI backbone ring was gradually migrated to a switched FDDI backbone topology, based on two interconnected high-performance FDDI switches. These "intelligent" communications switching devices interconnected a set of FDDI rings and relayed data packets amongst the rings at high switching speed. At their core was a cross-bar switch, providing a switching matrix of many simultaneous inputs and outputs; the maximum aggregate bandwidth was up to 3.6 Gbps. With the help of a dual-homing technique - which provided an alternative backup path as contingency - fault tolerance was ensured.
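As a rough illustration of how a cross-bar matrix reaches such an aggregate figure, the minimal Python sketch below models conflict-free simultaneous transfers across the switching matrix. It assumes the 3.6 Gbps figure corresponds to 36 concurrent 100 Mbps FDDI streams - an inference from the numbers above, not a stated specification:

    # Minimal model of a cross-bar switching matrix: any set of
    # input->output connections may be active at once, provided no
    # input or output port is used twice.
    PORT_SPEED_MBPS = 100  # one FDDI ring per port

    class Crossbar:
        def __init__(self, num_ports):
            self.num_ports = num_ports

        def schedule(self, requests):
            """Greedily grant a conflict-free set of (input, output) pairs."""
            used_in, used_out, granted = set(), set(), []
            for src, dst in requests:
                if src not in used_in and dst not in used_out:
                    used_in.add(src)
                    used_out.add(dst)
                    granted.append((src, dst))
            return granted

    switch = Crossbar(num_ports=36)
    # Every port forwards to its neighbour: no conflicts, so all 36
    # transfers proceed in parallel.
    requests = [(i, (i + 1) % 36) for i in range(36)]
    granted = switch.schedule(requests)
    print(f"{len(granted)} transfers -> {len(granted) * PORT_SPEED_MBPS / 1000} Gbps aggregate")

Unlike a shared ring, where all stations contend for one 100 Mbps token path, the cross-bar lets every non-conflicting input/output pair run at full port speed at the same time.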
1994 - 1995: From Bridged to Routed Network
In 1991, HKUST's campus network began with a bridged environment. Given the technical problems associated with the move to the permanent campus, setting up a bridged network (rather than a routed network) was deemed the simplest way to handle network management tasks. At that time, the network employed mainly FDDI concentrators, together with FDDI-to-Ethernet bridges. Multiple bridging domains segregated the network into different areas according to the nature of their use. In addition, local traffic could be contained within a domain to achieve better network performance. This segregation also allowed ITSC to utilise some of the packet-filtering capabilities of the bridges for security purposes, such as locking out student access to administrative data.
Throughout the initial stage of this bridging approach, the network coped well with demand. It was also stable, thanks to the careful placement of servers and the deployment of bridging together with FDDI switching. However, with an ever-increasing number of network nodes, a routed network was eventually deemed necessary to achieve better network robustness. Thus, by the end of 1992, ITSC decided to migrate to a routed network, using the latest router technology. Several high-end multi-protocol routers were connected to the FDDI switches over direct FDDI connections; these mainly routed IP traffic. The routers also firewalled, or isolated, selected areas from the main network, in order to maintain greater control over performance and security.
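The firewalling these routers performed amounted to rule-based matching on packet source and destination addresses. The minimal Python sketch below illustrates the idea; the prefixes and the single deny rule are hypothetical examples, and the production routers applied such filters in their forwarding path rather than in application code:

    import ipaddress

    # Hypothetical prefixes: an administrative subnet to protect and a
    # student-network range to lock out (addresses are illustrative only).
    ADMIN_NET = ipaddress.ip_network("10.1.0.0/16")
    STUDENT_NET = ipaddress.ip_network("10.200.0.0/16")

    def permit(src, dst):
        """Return True if a packet from src to dst should be forwarded.

        Rule 1: deny student-network sources addressed to the admin subnet.
        Rule 2: permit everything else (default-allow, like a simple filter list).
        """
        s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
        if s in STUDENT_NET and d in ADMIN_NET:
            return False
        return True

    print(permit("10.200.3.7", "10.1.0.25"))  # False: student -> admin blocked
    print(permit("10.200.3.7", "10.50.0.9"))  # True: other traffic forwarded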
1996 - 1999: Migration to Pervasively Switched Workgroups
In early 1996, ITSC began to evaluate network switches for connecting workgroup desktops. At that time we believed that pervasive switching at the workgroup level was definitely the way to go: migration to a pervasively switched environment would provide a more robust and secure network for all workgroup desktops, with better network performance and throughput. 10 Mbps Ethernet switches were then starting to appear, and we began to deploy them in some of the more traffic-demanding workgroup areas, such as our central computing laboratories and some of the engineering departments. In the summer of 1997, the whole student hall network (called ResNet) was re-engineered and migrated to a fully switched network with over 3,100 switched 10 Mbps connections.
By the end of 1997, 10/100 Mbps Fast Ethernet switches began to appear, offering ten times the performance at roughly 2-3 times the price. ITSC began to install these newer switches selectively in high-traffic areas. As we moved into 1998, the price of network switches continued to tumble, and from mid-1998 onwards ITSC bought solely 10/100 Mbps Fast Ethernet switches to provide workgroup network connectivity. As at mid-1999, ITSC had installed over 7,000 switched workgroup ports. The migration is an on-going process, and it is anticipated that by the end of 1999 almost all user workgroups will have been migrated to a switched environment. We will then have over 8,000 switched ports installed on campus.
1999 - 2000: Migration to Switched Gigabit Ethernet Backbone
In the second half of 1998, ITSC began to review its future campus backbone migration strategy. At that time there were two main migration paths from a legacy switched FDDI backbone: either ATM (Asynchronous Transfer Mode) or the newer 1,000 Mbps Gigabit Ethernet technology. After comparing the relative merits of these two backbone technologies, ITSC adopted, by the end of 1998, a switched Gigabit Ethernet backbone migration approach for our new campus LAN (local area network). The actual migration commenced in the summer of 1999, and the whole migration was completed in April 2000.
Our seven-year-old legacy switched FDDI backbone network, with 2 FDDI switches (GIGAswitch/FDDI) and 5 high-end FDDI backbone routers (Cisco 7500 and 7000 routers), was replaced by 2 high-performance, high-port-density Gigabit Ethernet switches (Cisco Catalyst 6509). These new Cisco routing switches employ the latest ASIC-based hardware routing technology (so-called Layer 3 switching) and deliver an aggregate routing throughput of up to 30 million packets per second at the backbone level, at least a 30- to 50-fold performance boost over the previous FDDI backbone. Additional redundant network pathways were also introduced for better fault tolerance. At the same time, most of the network pathways connecting individual workgroups (or subnets) were upgraded with at least a 10-fold speed increase, and selected high-powered backbone servers were upgraded with Gigabit Ethernet connections.
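To put the 30 million packets per second figure in perspective, a back-of-the-envelope calculation using standard Ethernet framing constants (our illustration, not an ITSC benchmark) relates it to line-rate gigabit ports:

    # Worst-case (minimum-size) frame throughput of one Gigabit Ethernet
    # port: each 64-byte frame also costs an 8-byte preamble and a
    # 12-byte inter-frame gap on the wire.
    LINK_BPS = 1_000_000_000
    FRAME, PREAMBLE, IFG = 64, 8, 12

    bits_per_frame = (FRAME + PREAMBLE + IFG) * 8   # 672 bits on the wire
    pps_per_port = LINK_BPS // bits_per_frame       # about 1.49 million pps
    print(f"One gigabit port at 64-byte frames: {pps_per_port:,} pps")
    print(f"30 Mpps is roughly {30_000_000 / pps_per_port:.1f} such ports")

In other words, the backbone could in principle route the worst-case traffic of roughly twenty fully loaded gigabit ports simultaneously.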