What is HPC?
High Performance Computing (HPC) is the application of large amounts of aggregated parallel computing power to solve complex computational problems that are too demanding for standard servers or desktop workstations.
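The core idea, splitting one large problem into pieces that are worked on simultaneously and then combined, can be sketched in miniature. This is a single-machine Python illustration of the decompose-and-combine pattern, not a real HPC workload; actual clusters distribute the chunks across many nodes using frameworks such as MPI:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    # One worker's share of the job -- a stand-in for the work one node would do.
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    # Decompose the range 0..n into roughly equal chunks, one per worker.
    step = n // workers
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    # Work the chunks concurrently, then combine the partial results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))
```

The result is identical to computing the sum serially; the point is that each chunk is independent, which is what makes the problem parallelisable in the first place.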

The classes of problems addressed by High Performance Computing include computationally intensive calculations in science, engineering and medical research.

Large clusters of machines, consisting of hundreds or thousands of nodes and thousands to tens of thousands of cores, are connected to a shared control and storage platform and then applied in parallel to the HPC task. Large HPC clusters are often described as supercomputers; the largest in the world as of June 2018 was the Summit supercomputer at the US DOE's Oak Ridge National Laboratory, with a performance of 122.3 petaflops across 4,356 nodes.
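Dividing the quoted Summit figures gives a feel for per-node performance. This is simple arithmetic on the numbers above, ignoring interconnect overheads and efficiency losses:

```python
total_flops = 122.3e15   # 122.3 petaflops, the quoted Summit performance
nodes = 4356             # the quoted node count

# Average performance contributed by a single node, in teraflops.
per_node_tflops = total_flops / nodes / 1e12
print(f"{per_node_tflops:.1f} teraflops per node")  # → 28.1 teraflops per node
```

Even a single node of such a machine delivers tens of teraflops, far beyond a desktop workstation.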

Typical HPC problems include weather forecasting, which involves large numbers of interconnected data cells; finite element analysis for vehicle crash simulation (pioneered in the 1980s by manufacturers such as BMW and Volkswagen); and drug discovery and development and genomics.
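Weather forecasting and crash simulation share a common computational pattern: a grid of cells, each repeatedly updated from the values of its neighbours. A toy one-dimensional diffusion step shows why the cells are "interconnected" and why the grids parallelise well; this is an illustrative sketch, not a production solver:

```python
def diffusion_step(cells, alpha=0.1):
    """One explicit update of a 1-D grid: each interior cell moves toward
    the average of its two neighbours. Boundary cells are held fixed."""
    new = cells[:]
    for i in range(1, len(cells) - 1):
        new[i] = cells[i] + alpha * (cells[i - 1] - 2 * cells[i] + cells[i + 1])
    return new

# A hot spot in the middle of a cold grid gradually spreads outwards.
grid = [0.0] * 5
grid[2] = 100.0
for _ in range(10):
    grid = diffusion_step(grid)
```

Because each cell depends only on its immediate neighbours, the grid can be split into regions handled by different nodes, with only the region edges exchanged between them each step; real weather models do this across millions of three-dimensional cells.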

Manufacturers known for specialising in HPC include IBM, Cray, HP and Fujitsu.

Data centers optimised for High Performance Computing must be designed to cope with higher than normal power density, which demands sufficient cooling capacity, and to provide high-speed interconnection between the HPC components for data replication.

Minimising Data Center costs for High Performance Computing

HPC (High Performance Computing) is a complex subject with many definitions. Clear up the confusion with this high-level overview, and learn about the importance of matching a high-efficiency data center environment to your HPC hardware and applications.