HPC Application Performance on ESX 4.1: Stream


Recently VMware has seen increased interest in migrating High Performance Computing (HPC)
applications to virtualized environments. This is due to the many advantages
virtualization brings to HPC, including consolidation, support for heterogeneous
OSes, ease of application development, security, job
migration, and cloud computing (all described at http://communities.vmware.com/community/cto/high-performance). Currently, some HPC
applications virtualize well from a performance perspective. Our long-term goal
is to extend this to all HPC apps, realizing that large-scale apps with the
lowest latency and highest bandwidth requirements will be the most challenging.
Users who run HPC apps are traditionally very sensitive to performance overhead,
so it is important to quantify the performance cost of virtualization and
properly weigh it against the advantages. Compared to commercial apps
(databases, web servers, and so on), which are VMware’s bread-and-butter, HPC
apps place their own set of requirements on the platform
(OS/hypervisor/hardware) in order to execute well. Two common ones are
low-latency networking (since a single app is often spread across a cluster of
machines) and high memory bandwidth. This article is the first in a series that
will explore these and other HPC performance subjects. Our goal will always be
to determine what works, what doesn’t, and how to get more of the former. The
benchmark reported on here is Stream (http://www.cs.virginia.edu/stream/ref.html), which is a standard tool designed
to measure memory bandwidth. It is a “worst case” micro-benchmark; real
applications will not achieve higher memory bandwidth.

Configuration

All tests were performed on an HP DL380 with two Intel X5570 processors, 48 GB memory (12
× 4 GB DIMMs), and four 1-GbE NICs (Intel Pro/1000 PT Quad Port Server Adapter)
connected to a switch. The guest and native OS is RHEL 5.5 x86_64. Hyper-threading
is enabled in the BIOS, so 16 logical processors are available. Processors and
memory are split between two NUMA nodes. A pre-GA lab version of ESX 4.1 was
used, build 254859.
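
For reference, the two-node layout on a host like this can be confirmed from Linux with the numactl package (an illustrative check, not part of the original test procedure):

numactl --hardware                 # lists the NUMA nodes, their CPUs, and memory sizes
grep -c processor /proc/cpuinfo    # should report 16 logical processors with hyper-threading enabled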

Test Results

The OpenMP version of Stream is used. It is built with gcc, enabling OpenMP with the -fopenmp compiler switch:


gcc -O2 -fopenmp stream.c -o stream
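
As noted below, the array size and iteration count are fixed in stream.c. If editing the source is inconvenient, they can usually be overridden at build time instead; this is a sketch that assumes the stream.c revision in use guards its defaults with #ifndef N / #ifndef NTIMES (newer revisions use STREAM_ARRAY_SIZE instead):

gcc -O2 -fopenmp -DN=100000000 -DNTIMES=40 stream.c -o stream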


The number of simultaneous threads is controlled by an environment
variable:


export OMP_NUM_THREADS=8
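
Sweeping the thread counts used in the tables below can be scripted; here is a minimal sketch (it assumes the standard Stream output lines beginning with Copy:, Scale:, Add:, and Triad:):

for t in 1 2 4 8 16; do
    export OMP_NUM_THREADS=$t
    echo "=== $t threads ==="
    ./stream | grep -E '^(Copy|Scale|Add|Triad):'
done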


The array size (N) and number of iterations (NTIMES) are hard-wired in the code as
N=10^8 (for a single machine) and NTIMES=40. The large array size
ensures that the processor cache provides little or no benefit. Stream reports
maximum memory bandwidth performance in MB/sec for four tests: copy, scale, add,
and triad (see the above link for descriptions of these). M stands for 1
million, not 2^20. Here are the native results, as a function of the
number of threads:


Table 1. Native memory bandwidth, MB/s

Threads         1        2        4        8       16
Copy         6388    12163    20473    26957    26312
Scalar       5231    10068    17208    25932    26530
Add          7070    13274    21481    29081    29622
Triad        6617    12505    21058    29328    29889

Note that the scaling starts to fall off after two threads and the memory links are
essentially saturated at 8 threads. This is one reason why HPC apps often do not
see much benefit from enabling Hyper-Threading. To achieve the maximum aggregate
memory bandwidth in a virtualized environment, two virtual machines (VMs) with 8
vCPUs each were used. This is appropriate only for
modeling apps that can be split across multiple machines. One instance of Stream
with N=5×10^7 was run in each VM simultaneously, so the total amount of
memory accessed was the same as in the native test. The advanced configuration
option preferHT=1 is used (see below). Bandwidths
reported by the VMs are summed to get the total. The results are shown in Table
2: just slightly greater bandwidth than for the corresponding native case.
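
Summing the per-VM numbers is mechanical; for example, if each VM's Stream output is saved to a file, the triad totals can be added up like this (file names are illustrative):

# on each VM:  ./stream > stream_vm1.txt   (or stream_vm2.txt)
awk '/^Triad:/ {sum += $2} END {print sum}' stream_vm1.txt stream_vm2.txt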


Table 2. Virtualized total memory bandwidth, MB/s, 2 VMs, preferHT=1

Total threads         2        4        8       16
Copy              12535    22526    27606    27104
Scalar            10294    18824    26781    26537
Add               13578    24182    30676    30537
Triad             13070    23476    30449    30010

It is apparent that the Linux “first-touch” scheduling algorithm, together with the
simplicity of the Stream algorithm, is enough to ensure that nearly all memory
accesses in the native tests are “local” (that is, the processor each thread
runs on and the memory it accesses both belong to the same NUMA node). In ESX
4.1 NUMA information is not passed to the guest OS and (by default) 8-vCPU VMs
are scheduled across NUMA nodes in order to take advantage of more physical
cores. This means that about half of memory accesses will be “remote” and that
in the default configuration one or two VMs must produce significantly less
bandwidth than the native tests. Setting preferHT=1
tells the ESX scheduler to count logical processors (hardware threads) instead
of cores when determining if a given VM can fit on a NUMA node. In this case
that forces both memory and CPU of an 8-vCPU VM to be scheduled on a single NUMA
node. This guarantees all memory accesses are local and the aggregate bandwidth
of two VMs can equal or exceed native bandwidth. Note that a single VM cannot
match this bandwidth. It will get either half of it (because it’s using the
resources of only one NUMA node), or about 70% (because half the memory accesses
are remote). In both native and virtual environments, the maximum bandwidth of
purely remote memory accesses is about half that of purely local. On machines
with more NUMA nodes, remote memory bandwidth may be less and the importance of
memory locality even greater.
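
On the native side, the local-versus-remote gap can be demonstrated directly with numactl by pinning Stream's threads to one node and its memory to either the same or the other node (node numbers are illustrative and depend on the host):

export OMP_NUM_THREADS=4
numactl --cpunodebind=0 --membind=0 ./stream    # purely local accesses
numactl --cpunodebind=0 --membind=1 ./stream    # purely remote accesses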

Summary

In both
native and virtualized environments, equivalent maximum memory bandwidth can be
achieved as long as the application is written or configured to use only local
memory. For native this means relying on the Linux “first-touch” scheduling
algorithm (for simple apps) or implementing explicit mechanisms in the code
(usually difficult if the code wasn’t designed for NUMA). For virtual a
different mindset is needed: the application needs to be able to run across
multiple machines, with each VM sized to fit on a NUMA node. On machines with
hyper-threading enabled, preferHT=1 needs to be set
for the larger VMs. If these requirements can be met, then a valuable feature of
virtualization is that the app needs to have no NUMA awareness at all; NUMA
scheduling is taken care of by the hypervisor (for all apps, not just for those
where Linux is able to align threads and memory on the same NUMA node). For
those apps where these requirements can’t be met (ones that need a large
single-instance OS), the current development focus is on relaxing these
requirements so that such apps behave more like they do natively, while
retaining the above advantage for small VMs.
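
For reference, the per-VM form of this option as documented for later vSphere releases is set in the VM's .vmx file; the exact spelling on ESX 4.1 may differ, so treat the following line as an assumption rather than a verified 4.1 setting (a host-wide Numa.PreferHT advanced option also exists):

numa.vcpu.preferHT = "TRUE"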


      