Microsoft has concluded that underwater datacentres are a reliable, practical and energy-efficient alternative to traditional land-based server farms, following the completion of a multi-year research project on this topic.
The software giant shared news of its Project Natick underwater datacentre experiments back in 2016, with the first phase of the initiative focused on determining the feasibility of building underwater datacentres, powered by offshore renewable energy sources.
This paved the way for the second phase of the project, which saw Microsoft deploy a 40ft prototype facility, 117ft under water, off the coast of the Orkney Islands in Scotland during the spring of 2018.
The reliability of the servers contained within the Orkney subsea datacentre, named Northern Isles, was tested and monitored over the course of the past two years, as Microsoft sought to test its hypothesis that submerged server farms might be less prone to technical difficulties than land-based ones.
“The team hypothesised that a sealed container on the ocean floor could provide ways to improve the overall reliability of datacentres,” said Microsoft, in a blog post.
“On land, corrosion from oxygen and humidity, temperature fluctuations and bumps and jostles from people who replace broken components are all variables that can contribute to equipment failure. The Northern Isles deployment confirmed their hypothesis, which could have implications for datacentres on land.”
Airing out the datacentre
Ahead of the datacentre’s original submersion, the vessel was filled with dry nitrogen to create a “benign” atmosphere inside, with another element of the experiment focusing on how this might affect the operating environment of the computer equipment inside.
One of the standout findings of the experiment is that the servers running inside the underwater datacentre were eight times more reliable than those on land. The Project Natick team suggests the nitrogen atmosphere may have played a part in this, as nitrogen is less corrosive than oxygen.
“Our failure rate in the water is one-eighth of what we see on land,” said Ben Cutler, a project manager in Microsoft’s Special Projects research group.
“I have an economic model that says if I lose so many servers per unit of time, I’m at least at parity with land. We are considerably better than that.”
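Cutler's parity argument can be illustrated with a back-of-the-envelope calculation. The sketch below is illustrative only: the land-based failure rate and server count are assumptions, not Microsoft's published figures; the only number taken from the article is the one-eighth ratio.

```python
# Back-of-the-envelope comparison of expected annual server failures.
# The 4% land failure rate and 864-server count are illustrative
# assumptions; only the one-eighth ratio comes from Project Natick.

def failures_per_year(server_count: int, annual_failure_rate: float) -> float:
    """Expected number of server failures per year."""
    return server_count * annual_failure_rate

LAND_RATE = 0.04              # assumed: 4% of land servers fail per year
SUBSEA_RATE = LAND_RATE / 8   # Natick reported one-eighth the land rate
SERVERS = 864                 # assumed vessel capacity for this sketch

land_failures = failures_per_year(SERVERS, LAND_RATE)
subsea_failures = failures_per_year(SERVERS, SUBSEA_RATE)

print(f"Expected land failures/year:   {land_failures:.1f}")
print(f"Expected subsea failures/year: {subsea_failures:.1f}")
```

On these assumed numbers, the submerged vessel would see roughly four failures a year against roughly 35 on land, which is the sense in which the economic model is "considerably better than parity".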
Another notable finding from the project is that the underwater facility was able to run “really well”, said Microsoft, despite being fed renewable power from a grid that would be considered “unreliable” by the standards of a land-based facility.
“We are hopeful that we can look at our findings and say maybe we don’t need to have quite as much infrastructure focused on power and reliability,” said Spencer Fowers, a principal member of technical staff for Microsoft’s special projects research group.
Lessons in latency
The lessons learned from Project Natick are also being used to inform Microsoft’s edge computing strategy for its Azure public cloud datacentres, so that the firm can work out where best to host “tactical and critical” workloads, the company said.
“We are populating the globe with edge devices, large and small,” said William Chappell, vice-president of mission systems for Azure, elsewhere in the blog post. “To learn how to make datacentres reliable enough not to need human touch is a dream of ours.”
The underwater datacentre concept has previously been billed by Microsoft as a good way to regulate the temperature of datacentres in a more energy-efficient way, as the underwater surroundings negate the need for expensive mechanical cooling systems.
The company has also been quick to talk up the latency benefits of offshore datacentres in the past, citing the statistic that, with half the world’s population living within 200km of an ocean, subsea builds have the potential to significantly cut data transfer times to users.
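The latency argument comes down to propagation distance. As a rough sketch (the 2,000km inland distance and the two-thirds-of-c fibre factor are illustrative assumptions, not figures from Microsoft):

```python
# Rough one-way propagation delay over optical fibre.
# The 2,000km "inland" comparison distance and the fibre speed factor
# are illustrative assumptions; 200km is the figure cited in the article.

SPEED_OF_LIGHT_KM_S = 299_792  # speed of light in vacuum, km/s
FIBRE_FACTOR = 0.67            # light travels at roughly 2/3 c in fibre

def one_way_delay_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds over optical fibre."""
    return distance_km / (SPEED_OF_LIGHT_KM_S * FIBRE_FACTOR) * 1000

coastal = one_way_delay_ms(200)    # user ~200km from a subsea datacentre
inland = one_way_delay_ms(2000)    # assumed distance to an inland facility

print(f"200km subsea hop:   {coastal:.2f} ms one-way")
print(f"2,000km inland hop: {inland:.2f} ms one-way")
```

Propagation delay scales linearly with distance, so a tenfold shorter fibre path means a tenfold smaller propagation delay; real-world latency also includes routing and queuing, which this sketch ignores.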