Breaking HPC Barriers with the 56GbE Cloud

dc.contributor.author: Atif, Muhammad
dc.contributor.author: Kobayashi, Rika
dc.contributor.author: Menadue, Benjamin J.
dc.contributor.author: Lin, Ching Yeh
dc.contributor.author: Sanderson, Matthew
dc.contributor.author: Williams, Allan
dc.date.accessioned: 2026-01-01T08:42:31Z
dc.date.available: 2026-01-01T08:42:31Z
dc.date.issued: 2016
dc.description.abstract: With the widespread adoption of cloud computing, high-performance computing (HPC) is no longer limited to organisations with the funds and manpower necessary to house and run a supercomputer. However, the performance of large-scale scientific applications in the cloud has in the past been constrained by latency and bandwidth. The main reason for these constraints is the design decisions of cloud providers, which primarily target high-density applications such as web services and data hosting. In this paper, we provide an overview of a high-performance OpenStack cloud implementation at the National Computational Infrastructure (NCI). This cloud is targeted at high-performance scientific applications, and enables scientists to build their own clusters when their demands and software stacks conflict with traditional bare-metal HPC environments. We present the architecture of our 56 GbE cloud and a preliminary set of HPC benchmark results against more traditional cloud and native InfiniBand HPC environments. Three different network interconnects and configurations were tested as part of the cloud deployment: 10G Ethernet, 56G fat-tree Ethernet, and native FDR full fat-tree InfiniBand (IB). These three solutions are discussed from the viewpoint of on-demand HPC clusters, focusing on bandwidth, latency and security. A detailed analysis of these metrics in the context of micro-benchmarks and scientific applications is presented, including the effects of using TCP and RDMA on scientific applications.
dc.description.status: Peer-reviewed
dc.format.extent: 9
dc.identifier.scopus: 84985961587
dc.identifier.uri: https://hdl.handle.net/1885/733799314
dc.language.iso: en
dc.relation.ispartofseries: 6th International Conference On Advances In Computing and Communications, ICACC 2016
dc.rights: Publisher Copyright: © 2016 The Authors. Published by Elsevier B.V.
dc.source: Procedia Computer Science
dc.subject: Cloud Computing
dc.subject: High Performance Computing
dc.subject: High Performance Ethernet
dc.subject: InfiniBand
dc.subject: RDMA over Ethernet
dc.subject: Scientific Applications
dc.title: Breaking HPC Barriers with the 56GbE Cloud
dc.type: Conference paper
dspace.entity.type: Publication
local.bibliographicCitation.lastpage: 11
local.bibliographicCitation.startpage: 3
local.contributor.affiliation: Atif, Muhammad; School of Computing, ANU College of Systems and Society, The Australian National University
local.contributor.affiliation: Kobayashi, Rika; CLOSED Supercomputing Facility, The Australian National University
local.contributor.affiliation: Menadue, Benjamin J.; School of Computing, ANU College of Systems and Society, The Australian National University
local.contributor.affiliation: Lin, Ching Yeh; Wearable and Portable Devices, Research School of Chemistry, ANU College of Science and Medicine, The Australian National University
local.contributor.affiliation: Sanderson, Matthew; School of Computing, ANU College of Systems and Society, The Australian National University
local.contributor.affiliation: Williams, Allan; Administrative Services, The Australian National University
local.identifier.ariespublication: U3488905xPUB25273
local.identifier.citationvolume: 93
local.identifier.doi: 10.1016/j.procs.2016.07.174
local.identifier.pure: 3dc3c05c-4335-46c3-b416-4b9351f61c19
local.identifier.url: https://www.scopus.com/pages/publications/84985961587
local.type.status: Published
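The abstract refers to micro-benchmarks measuring point-to-point latency and bandwidth over TCP and RDMA transports. As an illustration only (this is not the authors' benchmark code, which would typically be an MPI-level suite run across the 10G/56G Ethernet and InfiniBand fabrics), a minimal TCP ping-pong latency sketch over loopback looks like this:

```python
# Minimal TCP ping-pong latency micro-benchmark (illustrative sketch only).
# Runs a server thread and a client over loopback and estimates one-way
# latency as half the average round-trip time; real HPC measurements would
# run client and server on separate nodes over the fabric under test.
import socket
import threading
import time

MSG_SIZE = 64        # bytes per ping message
ITERATIONS = 1000    # round trips to average over

def recv_exact(sock, n):
    """Receive exactly n bytes (recv may return short reads)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

def _echo_server(port, ready):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    for _ in range(ITERATIONS):
        conn.sendall(recv_exact(conn, MSG_SIZE))  # echo each ping back
    conn.close()
    srv.close()

def measure_latency(port=50007):
    """Return the estimated one-way latency in seconds."""
    ready = threading.Event()
    t = threading.Thread(target=_echo_server, args=(port, ready))
    t.start()
    ready.wait()
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", port))
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    msg = b"x" * MSG_SIZE
    start = time.perf_counter()
    for _ in range(ITERATIONS):
        cli.sendall(msg)
        recv_exact(cli, MSG_SIZE)
    elapsed = time.perf_counter() - start
    cli.close()
    t.join()
    return elapsed / ITERATIONS / 2

if __name__ == "__main__":
    print(f"estimated one-way latency: {measure_latency() * 1e6:.1f} us")
```

TCP_NODELAY is set on both ends so that the small ping messages are not delayed by Nagle's algorithm, which would otherwise dominate the measurement; RDMA transports avoid this kernel TCP path entirely, which is one source of the latency gap the paper analyses.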