Our speaker this week, who wished to remain anonymous due to commercial sensitivities, addressed us directly, courtesy of modern technology, from a location in the western US. He is currently working as a Construction Manager with one of the world's 'Premier League' of high-tech corporations (Amazon, Facebook, IBM, Microsoft, etc.), commissioning a new site in a remote location. He is an expert in the design and implementation of large-scale projects, and had previously spoken to us about how underwater cables carry some 90% of internet traffic. After leaving a local comprehensive school, our speaker won a scholarship to study Construction at a university in Greater Manchester. He went on to work on such developments as the Manchester Arena, major hospitals and a BT data centre on the Isle of Man.
Many of us recall putting coins in a telephone box; mobile phones were not common before 1990, and the typing pool had not yet been replaced by the computer or word processor. While our level of expertise varies, most of us take electronic communications for granted these days. The World Wide Web has transformed the way we do business and keep in touch, and the current Covid lockdown has boosted usage and disrupted traditional ways of purchasing goods and services. Family meetings, university lectures, church services, and even Probus talks like this one, are now routinely 'streamed' or 'zoomed'. The pace and scale of change are accelerating, and the field has become dominated by large, mostly US- and China-based organisations.
Having set the scene, our speaker opened our eyes to the infrastructure and equipment 'architecture' required to support and service these developments. He divided his illustrated talk into four sections:
What is a Hyperscale Data Center (HDC)?
Traditional data centres are centralised facilities that house organisations' critical data and applications, using computing and networking systems and equipment to store data and give users access to resources. HDCs are significantly larger, to the extent that they can accommodate millions of servers and even more virtual machines. Hyperscaling is necessary for 'cloud' and other large-scale provision, while being more cost-effective and improving business operations. It allows flexible expansion to meet organisations' growing internet, data storage and computing needs without a proportional increase in cooling, electrical power or physical space.
An HDC scales its servers horizontally, enabling machines to be added or removed quickly and simply as capacity demands rise and fall. A load balancer manages this process by monitoring the amount of data that needs to be processed and directing work accordingly. Customers rent capacity, charged either by their kilowatt consumption or by the number of 'storage racks' they require.
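For readers who like to see the mechanics, a minimal sketch (in Python) of how a load balancer might grow a pool of servers and spread work across it is given below; the names, thresholds and figures are purely illustrative assumptions, not the operator's actual software.

# A purely illustrative sketch of horizontal scaling: a load balancer watches
# demand, adds or removes servers, and gives each new piece of work to the
# least-loaded machine. All names and numbers are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    load: float = 0.0                      # work currently assigned (arbitrary units)

@dataclass
class LoadBalancer:
    capacity_per_server: float = 100.0     # assumed work one server can absorb
    servers: list = field(default_factory=list)

    def dispatch(self, work: float) -> None:
        # Scale out while the current pool cannot absorb the total demand.
        total = sum(s.load for s in self.servers) + work
        while total > self.capacity_per_server * len(self.servers):
            self.servers.append(Server(f"server-{len(self.servers) + 1}"))
        # Hand the work to the least-loaded server.
        min(self.servers, key=lambda s: s.load).load += work

    def scale_in(self) -> None:
        # Drop idle servers when demand falls away, keeping at least one.
        self.servers = [s for s in self.servers if s.load > 0] or self.servers[:1]

lb = LoadBalancer()
for burst in (40, 70, 250, 30):            # pretend bursts of customer demand
    lb.dispatch(burst)
print(len(lb.servers), "servers in use")   # the pool has grown to meet demand

In a real HDC the same principle operates across many thousands of physical and virtual machines, with usage metered per kilowatt or per rack as described above.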
Where are HDCs built?
HDC facilities require at least 10,000 square feet to house 5,000+ servers running on ultra-high-speed fibre networks, but many are much larger. Our speaker's building occupies some 152,000 sq ft, on a site of about 350 acres, yet it is dwarfed by one in Texas of 2.5 million sq ft. Such sites are unlikely to be found within cities due to planning restrictions; they are increasingly located in run-down or undeveloped rural areas where there are few job opportunities. Sites should ideally be well away from seismic, flooding and extreme weather threats. For these reasons, HDCs are not freestanding but are typically linked to a network of five or six other HDCs in different locations, to provide both security and mutual backup. There are likely to be around 700 such sites worldwide by 2024.
How are HDC’s built?
To give us an impression of the major project in which he has a vital role, our speaker showed a video of an HDC site under construction. Contractors and sub-contractors, with their crews and equipment, were seen moving in sequence from ground clearance and groundworks through to roofing: trenching and the installation of storm drains and service ducting, the laying of reinforced concrete foundations, the erection of steel roof supports (sourced from several suppliers) and finally the roof covering. The next stage was the fitting out of electrical equipment and building services.
The typical construction time to commissioning is about 18-24 months, requiring a large initial input of labour and plant equipment. The permanent on-site workforce will, however, be small, perhaps 100 managers and technicians. Such remote siting needs new roads, but easier access may bring security concerns, so protective measures are specified, such as deep burial of fibre and electricity cables in concrete, security fencing and entrance restrictions. Electricity will be supplied from at least two grids, from which power can be 'stepped down'. Cooling, assisted by underfloor ventilation, is concentrated on the servers that host high-intensity workloads, and air flow is optimised to reclaim and recycle heat.
The Changing Landscape
Our speaker concluded his presentation by inviting us to move from present construction to a peep into the future. Where could we be in forty years' time? It's beyond imagination! Did anyone anticipate forty years ago that today they could see (in colour) and chat to their family in Australia on their tablet or mobile? Communication technology is speeding up and widening its applications: fields such as litigation, medical science and biotechnology are opening up, and networks will both respond to and stimulate these developments. He predicted massive growth in 'cloud' back-up and storage facilities. There will be a need for both HDCs and smaller facilities, made possible by the development of micro equipment, and a requirement for relevant technical education if economies are to remain competitive. To meet local or specialist needs, smaller facilities, such as those in the UK at Slough and Waltham Cross, will have to be built. Larger schemes especially will need to reflect growing environmental concerns, e.g. being sited near hydro, solar or wind facilities, or even canals and sewage farms, where dirty or 'grey' water can be recycled after use as a coolant.
Our speaker was warmly thanked for his fascinating presentation.