At last month’s SC19 supercomputing conference in Denver, Colorado, NVIDIA CEO Jensen Huang’s special address highlighted the impact that AI and GPU-accelerated HPC are having on industry and research.
Innovation in software and hardware is enabling new advances in areas as diverse as genomics, meteorology, pathology and nuclear waste remediation. Improvements in the deep learning software stack have delivered a 3x performance increase in just two years, and NVIDIA now powers a record 136 of the world’s top 500 supercomputers.
Where Edge and Cloud Combine Forces
Huang described his concept of ‘Streaming AI’, in which data from trillions of sensors and billions of cameras worldwide, far too much to ever be stored, is processed at the edge in real time. The models doing that processing, however, are trained and developed on supercomputers located in centralised data centers.
Ever more powerful deep learning servers such as the DGX-1 and DGX-2 leverage their tightly interconnected architecture and the power of the Tesla V100 GPU to deliver huge increases in performance. These servers, however, demand reliable hosting environments with predictable power and cooling, very fast local storage arrays and excellent external connectivity, requirements typically met only by multi-MW data centers.
H66 and HPC as a Service
H66 joined NVIDIA’s Inception programme in 2019, and during 2020 we plan to work with partners to deliver HPC deep learning Infrastructure as a Service to enterprise customers. We are excited by the potential of these revolutionary platforms and by how they can be leveraged from our ultra-low-emissions green data center.