Microsoft is turning Azure into the world’s largest supercomputer


Microsoft has been experimenting with FPGAs, and now it's on track to turn Azure into the world's biggest supercomputer

Microsoft is embarking on a major upgrade of its Azure cloud platform. While the new hardware the company is installing in its 34 datacenters around the world will still contain the usual mix of processors, RAM, storage and networking hardware, Microsoft is also adding something new to every server it installs: Intel Altera Field Programmable Gate Arrays, better known as FPGAs.

FPGAs are highly configurable processors that can be rewired in software to optimise and accelerate particular algorithms, and the new FPGA network will be used to speed up artificial intelligence services, among many other things. Microsoft says the current rollout is happening in 15 countries across five continents, and though there is no detail available, it's likely that newer datacenters, such as those recently opened in the UK, are already using the technology.


Microsoft first investigated using FPGAs to accelerate the Bing search engine. In "Project Catapult," Microsoft added off-the-shelf FPGAs on PCIe cards to some Bing servers and programmed them to perform parts of the Bing ranking algorithm in hardware. The result was a 40-fold speed increase compared to a software implementation running on a regular CPU.
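To make the division of labour concrete, here is a minimal sketch of that offload pattern, with the FPGA simulated in software. All function names, features and weights are hypothetical, not Microsoft's: the host CPU does the cheap feature extraction, then hands batches of features to the accelerator, which evaluates the scoring expression in hardware.

```python
# Minimal sketch of the Catapult-style offload pattern (all names hypothetical):
# the host extracts per-document features, then hands batches to an accelerator
# that evaluates the ranking expression. Here the "FPGA" side is simulated
# in software purely to show the division of labour.

def extract_features(query: str, doc: str) -> list[float]:
    """Host-side feature extraction (cheap, stays on the CPU)."""
    terms = query.lower().split()
    words = doc.lower().split()
    hits = sum(words.count(t) for t in terms)
    return [hits / max(len(words), 1), float(len(words))]

def fpga_score_batch(feature_batch: list[list[float]]) -> list[float]:
    """Stand-in for the PCIe call into the FPGA's scoring pipeline.
    A fixed weighted sum mirrors a hardwired scoring expression."""
    weights = [10.0, -0.001]  # illustrative weights only
    return [sum(w * f for w, f in zip(weights, feats)) for feats in feature_batch]

docs = ["azure adds fpga boards to servers", "unrelated text about gardening"]
features = [extract_features("fpga azure", d) for d in docs]
for doc, score in zip(docs, fpga_score_batch(features)):
    print(f"{score:6.3f}  {doc}")
```

The win comes from the hardwired scoring stage running as a pipeline at line rate rather than as sequential CPU instructions; the batching keeps the PCIe round-trips amortised.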

A common next step after achieving success with an FPGA is to create an Application-Specific Integrated Circuit (ASIC): a dedicated, hardcoded equivalent of the FPGA design. This is what Microsoft did with the Holographic Processing Unit in its HoloLens headset, for example, because an ASIC offers greatly reduced power consumption and size. But the Bing team stuck with FPGAs because their algorithms change dozens of times a year, and an ASIC can take months to produce, meaning that by the time it arrived it would already be obsolete.

With the pilot program behind it, the company then looked at ways to use FPGAs more widely, not just in Bing but in Azure and Office 365 as well. But according to Azure CTO Mark Russinovich, Azure's acute problem wasn't search engine algorithms; it was network traffic.

Azure runs a vast number of virtual machines across hundreds of thousands of physical servers around the world, using a close relative of the commercial version of Microsoft's Hyper-V hypervisor to manage the virtualization of the physical hardware. Virtual machines have one or more virtual network adaptors through which they send and receive network traffic, while the hypervisor handles the physical network adaptors that are actually connected to Azure's networking infrastructure. Moving traffic to and from the virtual adaptors, while simultaneously applying rules to handle load balancing, traffic routing, and so on, incurs a CPU cost.
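As a rough illustration of where that cost comes from, the sketch below models the software path: a hypothetical virtual switch (the rules and names are invented for illustration) that must evaluate a rule list on the host CPU for every single packet a VM sends or receives.

```python
# Toy model of the software path described above (rules and names hypothetical):
# every packet a VM sends passes through the hypervisor's virtual switch, which
# must match it against load-balancing and routing rules on a host CPU core.

from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    dst_port: int

RULES = [
    # (predicate, action) pairs evaluated in order, per packet, in software
    (lambda p: p.dst_port == 80, "rewrite-to-load-balancer-backend"),
    (lambda p: p.dst.startswith("10."), "route-internal-fabric"),
    (lambda p: True, "route-default-gateway"),
]

def vswitch_process(packet: Packet) -> str:
    """Per-packet work the hypervisor performs on the CPU."""
    for predicate, action in RULES:
        if predicate(packet):
            return action
    return "drop"

print(vswitch_process(Packet("10.0.0.4", "10.0.0.9", 443)))
```

A few microseconds of this per packet is invisible at desktop traffic levels, but it multiplies directly with packet rate.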


This cost is negligible at low data rates, but when pushing multiple gigabits of traffic per second to a VM, Microsoft was finding that the CPU burden was considerable. Entire processor cores had to be dedicated to this network workload, meaning that they couldn’t be used to run VMs.
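A back-of-envelope calculation shows why. The figures below are illustrative assumptions, not Microsoft's numbers, but they give the flavour:

```python
# Back-of-envelope numbers (illustrative assumptions, not Microsoft's figures)
# for why multi-gigabit software switching consumes whole cores.

link_gbps = 25                 # target per-VM throughput
pkt_bytes = 1500               # typical MTU-sized packet
cycles_per_pkt = 2_500         # assumed software-switch cost per packet
cpu_ghz = 2.4                  # assumed core clock speed

pkts_per_sec = link_gbps * 1e9 / 8 / pkt_bytes
cores_needed = pkts_per_sec * cycles_per_pkt / (cpu_ghz * 1e9)
print(f"{pkts_per_sec/1e6:.1f} Mpps -> {cores_needed:.1f} cores of pure packet work")
```

Under those assumptions, sustaining 25 gigabits per second of MTU-sized packets means roughly two million packets per second, which works out to about two full cores doing nothing but packet processing, cores that can't be sold to customers as VM capacity.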

The Bing team didn’t have this network use case, so the way they used FPGAs wasn’t immediately suitable. The solution that Microsoft came up with to handle both Bing and Azure was to add a second connection to the FPGAs. In addition to their PCIe connection to each hardware server, the FPGA boards were also plumbed directly into the Azure network, enabling them to send and receive network traffic without needing to pass it through the host system’s network interface.

That PCIe interface is shared directly with the virtual machines, giving them the ability to send and receive network traffic without having to bounce it through the host first. The result? Azure virtual machines can push 25 gigabits per second of network traffic, with a latency of 100 microseconds. That's a tenfold latency improvement, and it can be done without demanding any host CPU resources.

Technically, something similar could be achieved with a suitable network card. Higher-end network cards can likewise be shared and directly assigned to virtual machines, allowing the host to be bypassed, but this usually comes with limitations (a given card may only be assignable to, say, four VMs simultaneously), and it doesn't offer the same flexibility. When programming the FPGAs for Azure's networking, Microsoft can build the load balancing and other rules directly into the FPGA. With a shared network card, those rules would still have to be handled on the host processor within a device driver.
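The sketch below illustrates the match-action idea in miniature (a hypothetical layout, not Microsoft's implementation): the first packet of a flow is classified against the full rule set, and the resulting action is cached so that every subsequent packet needs only a single exact-match lookup, which is precisely the kind of table an FPGA pipeline can evaluate at line rate.

```python
# Sketch of moving match-action rules into hardware (hypothetical layout):
# classify the first packet of each flow against the full rule set, then
# cache the action so later packets hit one exact-match lookup, the kind
# of table an FPGA pipeline can evaluate at line rate.

flow_cache: dict[tuple, str] = {}

def classify_slow_path(flow: tuple) -> str:
    """Full rule evaluation, done once per flow."""
    src, dst, dst_port = flow
    if dst_port == 80:
        return "load-balance"
    return "forward"

def process(flow: tuple) -> str:
    action = flow_cache.get(flow)
    if action is None:                  # first packet of the flow
        action = classify_slow_path(flow)
        flow_cache[flow] = action       # subsequent packets bypass the rules
    return action

print(process(("10.0.0.4", "52.1.2.3", 80)))   # slow path, caches the action
print(process(("10.0.0.4", "52.1.2.3", 80)))   # fast path, cache hit
```

A fixed-function network card can't absorb new rule types this way; an FPGA can be reprogrammed whenever Azure's networking policies change.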


This flexibility means that for Azure, just as with Bing before it, FPGAs are a better solution than ASICs.

This solution of network-connected FPGAs works as well for Azure as it does for Bing, and Microsoft is now rolling it out to all of its datacenters. Bing can use the FPGAs for workloads such as ranking, feature extraction from photos, and machine learning, and Azure can use them for accelerated networking.

Currently, the FPGA networking is in preview; once it’s widespread enough across Microsoft’s datacenters, it will become standard, with the long-term goal being to use FPGA networking for every Azure virtual machine.

Networking is the first FPGA-accelerated workload in Azure, but it's not going to be the only one. In principle, Microsoft could offer a menu of FPGA-accelerated algorithms (pattern matching, machine learning, and certain kinds of large-scale number crunching would all be good candidates) that virtual machine users could opt into, and in the longer term, custom programming of the FPGAs could be an option.

Microsoft gave a demonstration at its Ignite conference this week of just what this power could be used for.

Distinguished engineer Doug Burger, who led the Project Catapult work, demonstrated a machine translation of all 3 billion words of English Wikipedia on thousands of FPGAs simultaneously, crunching through the entire data set in just a tenth of a second. The total processing power of all those FPGAs together was estimated at about 1 exa-operation (one billion billion, or 10^18, operations) per second, working on an artificial intelligence-style workload.
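Taking those figures at face value, the arithmetic behind the claim looks like this (a quick check using only the numbers quoted above):

```python
# Sanity-checking the demo's arithmetic from the figures quoted above.

words = 3e9            # English Wikipedia word count, as stated
seconds = 0.1          # time to translate the whole corpus
ops_per_sec = 1e18     # ~1 exa-operation per second across the FPGA fleet

ops_done = ops_per_sec * seconds
print(f"{words/seconds:.1e} words/s")        # 3.0e+10 words per second
print(f"{ops_done/words:.1e} ops per word")  # ~3.3e+07 operations per word
```

That is roughly 30 billion words translated per second, with on the order of tens of millions of operations spent on each word, which is plausible for a neural translation workload.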

While not a primary target for the company, the scale of cloud compute power, combined with the increasingly high-performance interconnects that FPGAs and other technologies enable, means that cloud-scale systems are likely in time to be competitive with, and eventually surpass, the supercomputers used for high-performance computing workloads. With this kind of processing power on tap, it won't be too long before Azure becomes a collection of supercomputers available for anyone to rent by the hour.
