The MPI 3.0 Book – Using Advanced MPI

Our book “Using Advanced MPI” will appear in about a month, so now is the time to pre-order it on Amazon at a reduced price. It is published by the prestigious MIT Press and is a must-read for parallel computing experts.

The book contains everything advanced MPI users need to know. It presents all important concepts of MPI 3.0, including all newly added functions such as nonblocking collectives and the greatly extended one-sided functionality. But the key is that the book is written in an example-driven style: all functions are motivated with use cases, and working code is available for most of them. This follows the successful tradition of the “Using MPI” series, lifts it to MPI-3.0, and hopefully makes it an exciting read!

David Bader’s review hits the point:

With the ubiquitous use of multiple cores to accelerate applications ranging from science and engineering to Big Data, programming with MPI is essential. Every software developer for high performance applications will find this book useful for programming on modern multicore, cluster, and cloud computers.

Here is a quick overview of the contents:

Section 1: “Introduction” provides a brief overview of the history of MPI and briefly summarizes the basic concepts.

Section 2: “Working with Large Scale Systems” contains examples of how to create highly scalable systems using nonblocking collective operations, the new distributed graph topology for MPI topology mapping, neighborhood collectives, and advanced communicator creation functions. It equips readers with all the information needed to write highly scalable codes. It even describes how fault-tolerant applications could be written using a high-quality MPI implementation.
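
To give a flavor of the nonblocking collectives covered there, here is a minimal sketch (my own illustration, not code from the book) that overlaps an MPI_Iallreduce with independent computation:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = (double)rank, global = 0.0;
    MPI_Request req;

    /* start the reduction without blocking */
    MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD, &req);

    /* ... independent computation here, overlapped with communication ... */

    MPI_Wait(&req, MPI_STATUS_IGNORE);  /* the result is valid only after completion */
    if (rank == 0) printf("sum = %f\n", global);

    MPI_Finalize();
    return 0;
}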

Section 3: “Introduction to Remote Memory Operations” is a gentle introduction to RMA (one-sided) programming with MPI-3.0. It starts with the concepts of memory exposure (windows) and simple data movement. It presents various example problems, followed by practical advice for avoiding common pitfalls, and it concludes with a discussion of performance.
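
As a taste of the basic pattern this chapter starts from, here is a minimal sketch (my own illustration, not the book’s code) that creates a window and moves data with MPI_Put under fence synchronization:

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int buf = 0;             /* memory exposed to the other processes */
    MPI_Win win;
    MPI_Win_create(&buf, sizeof(int), sizeof(int), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);   /* open the access/exposure epoch */
    if (rank == 0) {
        int value = 42;
        /* write 'value' into the window of the last process */
        MPI_Put(&value, 1, MPI_INT, size - 1, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);   /* data movement is complete after this fence */

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}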

Section 4: “Advanced Remote Memory Access” will make you a true expert in RMA programming. It covers advanced concepts such as passive target mode and allocating MPI windows, using various examples, and it also discusses memory models and scalable synchronization approaches.
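
For illustration only (this is my sketch, not the book’s code), the passive-target style discussed in this chapter could look roughly like this: allocate the window with MPI_Win_allocate and let other processes atomically increment a counter at rank 0 without rank 0 participating:

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *base;               /* window memory allocated by MPI */
    MPI_Win win;
    MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &base, &win);

    /* initialize the window memory before anybody accesses it */
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, rank, 0, win);
    *base = 0;
    MPI_Win_unlock(rank, win);
    MPI_Barrier(MPI_COMM_WORLD);

    if (rank > 0) {
        int one = 1, old;
        /* passive target: no participation of rank 0 is required */
        MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
        MPI_Fetch_and_op(&one, &old, MPI_INT, 0, 0, MPI_SUM, win);
        MPI_Win_unlock(0, win);
    }

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}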

Section 5: “Using Shared Memory with MPI” explains MPI’s approach to shared memory. MPI-3.0 added support for allocating shared memory, which essentially enables the new hybrid programming model “MPI+MPI”. This section explains the guarantees that MPI provides (and those it does not) and presents several use cases for shared-memory windows.
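
A minimal sketch of the MPI+MPI pattern (again my own illustration, not the book’s code): split off a per-node communicator, allocate a shared-memory window, and access another process’s segment with plain loads and stores:

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* communicator containing only the processes on the same node */
    MPI_Comm nodecomm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &nodecomm);
    int nrank;
    MPI_Comm_rank(nodecomm, &nrank);

    /* every process contributes one double to a node-wide shared segment */
    double *mybase;
    MPI_Win win;
    MPI_Win_allocate_shared(sizeof(double), sizeof(double), MPI_INFO_NULL,
                            nodecomm, &mybase, &win);

    /* query a pointer to rank 0's portion of the segment */
    MPI_Aint sz; int disp;
    double *base0;
    MPI_Win_shared_query(win, 0, &sz, &disp, &base0);

    MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
    *mybase = (double)nrank;            /* plain store into shared memory */
    MPI_Win_sync(win);
    MPI_Barrier(nodecomm);              /* wait until everyone has written */
    MPI_Win_sync(win);
    double value_of_rank0 = base0[0];   /* plain load from rank 0's memory */
    (void)value_of_rank0;
    MPI_Win_unlock_all(win);

    MPI_Win_free(&win);
    MPI_Comm_free(&nodecomm);
    MPI_Finalize();
    return 0;
}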

Section 6: “Hybrid Programming” provides a detailed discussion of how to use MPI in cooperation with other programming models, for example threads or OpenMP. Hybrid programming is emerging as a standard technique, and MPI-3.0 introduces several functions to ease the cooperation with other models.
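
The entry point for such hybrid codes is requesting a thread level at initialization; a small sketch (assuming an MPI+OpenMP setup, my own illustration) could look like this:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided;
    /* ask for full thread support and check what the library actually provides */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_FUNNELED) {
        fprintf(stderr, "insufficient thread support\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    #pragma omp parallel
    {
        /* threaded computation here; with MPI_THREAD_FUNNELED only the
           master thread may call MPI */
    }

    MPI_Finalize();
    return 0;
}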

Section 7: “Parallel I/O” will be most important in the coming Big Data world. MPI provides a large set of facilities to support operations on large distributed data sets. We discuss how MPI supports contiguous and noncontiguous accesses as well as the consistency of file operations. Furthermore, we provide hints for improving the performance of MPI I/O.
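
As a small illustration of collective MPI I/O (not from the book; the file name and block size are made up), every process writes its own block into one shared file:

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* each process writes its own block of 100 ints into one shared file */
    int buf[100];
    for (int i = 0; i < 100; i++) buf[i] = rank;

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    MPI_Offset offset = (MPI_Offset)rank * 100 * sizeof(int);
    MPI_File_write_at_all(fh, offset, buf, 100, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}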

Section 8: “Coping with Large Data” addresses what happens once those Big Data sets are in main memory: we may need to communicate them. MPI-3.0 supports handling large data (>2 GiB) through derived datatypes. We explain how to enable this support and the limitations of the current interface.
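
The basic trick is to wrap the buffer in a derived datatype so that the count passed to the communication call stays small. A rough sketch (my own function-level fragment, assuming for brevity that the total size is a multiple of the chunk size):

#include <mpi.h>

#define CHUNK (1 << 30)   /* 1 GiB pieces */

/* send 'count' chars where count may exceed INT_MAX */
void send_large(const char *buf, long long count, int dest, MPI_Comm comm) {
    MPI_Datatype chunk_t, big_t;
    MPI_Type_contiguous(CHUNK, MPI_CHAR, &chunk_t);
    MPI_Type_contiguous((int)(count / CHUNK), chunk_t, &big_t);
    MPI_Type_commit(&big_t);

    MPI_Send(buf, 1, big_t, dest, 0, comm);  /* a count of 1, but a large type */

    MPI_Type_free(&big_t);
    MPI_Type_free(&chunk_t);
}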

Section 9: “Support for Performance and Correctness Debugging” is aimed at very advanced programmers as well as tool developers. It describes the MPI tools interface, which allows one to introspect the internals of the MPI library. Its flexible interface exposes performance counters as well as control variables that influence the behavior of MPI. Advanced expert programmers will love this interface for architecture-specific tuning!
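
A tiny sketch of the tools interface (illustrative only): it has its own initialization and lets you query, for example, how many control variables the MPI library exposes:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, num_cvars;

    /* the tools interface has its own initialization */
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_Init(&argc, &argv);

    /* ask the library how many control variables it exposes */
    MPI_T_cvar_get_num(&num_cvars);
    printf("this MPI library exposes %d control variables\n", num_cvars);

    MPI_Finalize();
    MPI_T_finalize();
    return 0;
}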

Section 10: “Dynamic Process Management” explains how processes can be created and managed. This feature enables growing and shrinking of MPI jobs during their execution and fosters new programming paradigms, provided it is supported by the batch system. We only discuss the MPI part in this chapter, though.
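
A minimal sketch of spawning additional processes (my own illustration, assuming argv[0] names the executable): the initial job spawns four copies of itself, and both sides disconnect again:

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    MPI_Comm parent;
    MPI_Comm_get_parent(&parent);

    if (parent == MPI_COMM_NULL) {
        /* we are the initial job: spawn 4 additional copies of ourselves */
        MPI_Comm intercomm;
        MPI_Comm_spawn(argv[0], MPI_ARGV_NULL, 4, MPI_INFO_NULL, 0,
                       MPI_COMM_WORLD, &intercomm, MPI_ERRCODES_IGNORE);
        MPI_Comm_disconnect(&intercomm);
    } else {
        /* we were spawned: disconnect from the parent job and exit */
        MPI_Comm_disconnect(&parent);
    }

    MPI_Finalize();
    return 0;
}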

Section 11: “Working with Modern Fortran” is a must-read for Fortran programmers! How does MPI support type-safe programming and what are the remaining pitfalls and problems in Fortran?

Section 12: “Features for Libraries” addresses advanced library writers and describes principles for developing portable, high-quality MPI libraries.

SPCL hike v2.0, this time to the Grosse Mythen

So we did it again: we celebrated Greg’s, Maciej’s, and Bogdan’s recent successes with a barbecue. But we would not be SPCL if it were just any barbecue … of course it was at the top of a mountain, this time the “Grosse Mythen”.

Remember v1.0 to Mount Rigi? Switzerland is absolutely awesome! Mountains, everything is green, ETH :-) .

Here are some impressions:

track
The complete tour was 6.9 km with a total walking time of 3.35 hours and an altitude difference of only 750 m (from 1149 m to 1899 m). Much shorter and less altitude than our last trip, but not less fun! And yes, my GPS was a bit off :-/.

IMG_1936
So this is how it looked from the start — I have to say, pretty impressive. But it turned out to be much, much simpler than it looks, and MUCH simpler than Rigi was :-) .

IMG_1940
Do you see the flag on top (yeah, it’s the red pixel on the right side in this resolution)? That’s where we’ll hike!

IMG_1946
Our first real view towards Brunni.

IMG_1943
The path is actually a bit more strenuous at the beginning than later on, but overall simple.

IMG_1947
It still seems impressive!

IMG_1957
Lots of planes around the Mythen; some seemed rather historic, like this one.

IMG_1951
First stage done — arrived at Holzegg and view down to Brunni (yeah, we could have taken the cable car but are we men or what?). SPCLers walk up mountains!

IMG_1961
Now the steep part begins; it’s actually a bit dangerous — many places where you can fall a couple of hundred meters :-) .

IMG_1971
Mac still looks too good … we should have taken more water and food :-) . Remember Rigi?

IMG_1965
Nice views and deep abysses.

IMG_1976
The path goes basically vertically (in serpentines) up a rock wall. Nice! You should not be afraid of heights …

IMG_1972
… because you will constantly see beautiful things like this …

IMG_1986
… or this …

IMG_1993
… or horror abysses right at the trail like this :-) .

IMG_2003
But there were helpers; these nice chains probably saved our lives more than once.

IMG_1988
Mac acquired a second backpack and some stones on the way … and he’s still looking too good!

IMG_2000
A nice bench … again, for people not afraid of heights.

IMG_2010
The chains at the abyss :-) .

IMG_2007
The neighboring “Kleiner Mythen” is apparently much harder to climb. And it’s smaller, so why would we climb it anyway!? ;-)

IMG_2011
The path – awesome! Walking a nice and thin ridge.

IMG_2024
And again, some nice opportunities to fly, ahem, fall.

IMG_2012
Still snow around the top in May.

IMG_2028
Finally, the top.

IMG_2029
The top – we made it! Most importantly: the Swiss flag :-) .

IMG_2032
Beautiful views …

IMG_2030
The last ascent to the very top … still looking too fresh!

IMG_2039
Beautiful.

IMG_2045
We all made it to the top alive (and later back down).

IMG_2041
Nice weather actually.

IMG_2060
Relaxing with a beer … tststs.

IMG_2059
More beer!?

IMG_2059
Meanwhile food preparations start.

IMG_2062
The grill didn’t start that well … we should have sent smoke signals down to the valley ;-) .

IMG_2065
Some folks took turns blowing on it.

IMG_2064
We brought some nice food … to the nice view!

IMG_2070
And ate it quickly ;-) .

IMG_2072
To avoid fights with the locals!

IMG_2071
Finally the grill started … took 30 mins or so ;-) .

IMG_2080
Too much food!

IMG_2083
Beautiful views, did I mention that it was rather high?

IMG_2082
But most beautiful views — Rigi was just left of this.

IMG_2099
And of course, if you travel with a Polak, you’ll get some top-vodka.

IMG_2090
The car was waiting patiently in Brunni.

IMG_2118
On the way back down.

IMG_2112
Many opportunities for free falls :-) .

IMG_2126
And beautiful alpine flowers.

IMG_2130
I drove, and we made some contact with cows :-) .

IMG_2128
Others took the bus back home.

All in all, *awesome* and very efficient; the whole tour took 7.5 hours :-) . So we set some new standards for SPCL hike v3.0!

ExaMPI’13 Workshop at SC13

I wanted to highlight the ExaMPI’13 workshop at SC13. It was a while ago but it is worth reporting!

The workshop’s theme was “Exascale MPI” and it addressed several topics on how to move MPI to the next big divisible-by-10^3 floating point number. Actually, for exascale it’s unclear whether it’s only about FLOPs; maybe it’s data now, but then, we easily have machines with exabytes :-) . Anyway, MPI is a viable candidate to run on future large-scale machines, maybe at a low level.

A while ago, some colleagues and I summarized the issues that MPI faces in going to large scale: “MPI on Millions of Cores“. The conclusion was that it’s possible to move forward but some non-scalable elements need to be removed or avoided in MPI. This was right on topic for this workshop, and indeed, several authors of the paper were speaking!

The organizers invited me to give a keynote to kick off the event. I talked about large-scale MPI and large-scale graph analysis and how the latter could be done in MPI. [Slides]

The very nice organizers sent me some pictures that I want to share here:


My keynote on large-scale MPI and graph algorithms.


The gigantic room was well filled (I’d guess more than 50 people).


Jesper talking about the EPIGRAM project to address MPI’s needs at future large scales.


The DEEP strategy of Jülich using inter-communicators (the first users I know of).


Pavan on our heterogeneous future, very nice insights.

All in all, a great workshop with a very good atmosphere. I received many good questions and had very good discussions afterwards.

Kudos to the organizers!

HiPEAC’13 in Vienna

This year, I attended my first HiPEAC conference. We had a paper in the main track. It was in Vienna and thus really easy to reach (1 hour by plane). I actually thought about commuting from Zurich every day.

Bogdan presented the paper and did a very good job! I received several positive comments afterwards. Kudos Bogdan!

SPCL at Supercomputing (SC13)

Supercomputing is the premier conference in high performance and parallel computing. With more than 10,000 attendees, it’s also the largest and highest-impact conference in the field. Literally everybody is there. Torsten summarized the role of the conference in a blog post over at CACM.

SPCL had a great year at Supercomputing 2013 (SC13)! We were involved in multiple elements of the technical program:

  1. Three paper talks (Alexandru, Andrew, Robert) in the technical papers program
  2. Two posters (Aditya, Robert) in the posters program
  3. SPCL researchers received three awards
  4. Torsten gave a keynote at the ExaMPI’13 workshop
  5. Torsten co-presented a well-attended (~50 people) tutorial on advanced MPI programming
  6. Torsten revealed the 2nd Green Graph 500 List at the Graph 500 BoF
  7. Torsten co-organized the Emerging Technologies program

Here are some impressions:

IMG_1723_small
Impressions from the Emerging Technologies Booth

IMG_1746_small
Impressions from the Emerging Technologies Booth

IMG_1752_small
Impressions from the Emerging Technologies Booth

IMG_1735_small
Impressions from the Emerging Technologies Booth

IMG_1840_small
Emerging Technology Booth Talks were generally well attended!

IMG_1787_small
Maciej and his session chair (Rajeev Thakur) are preparing for the presentation of our best paper candidate.

IMG_1789_small
The large room was packed (this shows only the right half; the left was just as full).

IMG_1803_small
Robert and Maciej fielding questions.

IMG_1856_small
Robert presenting his poster … unfortunately, Maciej didn’t take a picture of Aditya and his poster which was upstairs (and looked at least as good :-) ).

IMG_1807_small
Torsten releasing the 2nd Green Graph 500 list with surprises in the top ranks!

sc13_best_paper
The SC13 Best Paper team and presentation!

sc13_yasc
Torsten receives his “Young Achievers in Scalable Computing” Award.

IMG_1952_small
Torsten receives his “Young Achievers in Scalable Computing” Award.

IMG_1959_small
Robert received the ACM SRC Bronze medal.

All in all – a great success! Congrats and thanks to everyone who contributed!

Some more nice pictures can be found here.

Emerging Technologies ramping up at SC13

A new element of this year’s Supercomputing conference (SC13), Emerging Technologies, is ramping up right now. The booth of impressive size (see below) features 17 diverse high-impact projects that will change the future of supercomputing!

et_booth_bringup

Emerging Technologies (ET) is part of the technical program, and all proposals were reviewed in an academically rigorous process. However, in contrast to the standard technical program, ET is located on the main show floor (booth #3547). This makes it possible to demonstrate technologies and innovations that would otherwise not reach the show floor.

The standing exhibit is complemented by a series of short talks about the technologies. These talks take place on Tuesday and Wednesday afternoons in the neighboring “HPC Impact Showcase” theater (booth #3947).

Check out http://sc13.supercomputing.org/content/emerging-technologies for the booth talks program!

Bob Lucas and I have been organizing the technical exhibit this year and Satoshi Matsuoka will run it for SC14 :-) .

So make sure to swing by if you’re at SC13. It’ll definitely be a great experience!

The end of an old reliable friend

Today I typed the last commands on my long-running server (which served www.unixer.de from 2006 until yesterday):

IMG_3296_small

benten ~ $ uptime
 01:31:47 up 676 days, 16:20,  5 users,  load average: 2.08, 1.42, 1.39
benten ~ $ dd if=/dev/zero of=/dev/hda &
benten ~ $ dd if=/dev/zero of=/dev/hdb &
benten ~ $ dd if=/dev/zero of=/dev/hdc &

This machine was an old decommissioned cluster node (well, the result of combining two half-working nodes) and had served me very well since 2006 (seven years!). Today it was shut off, after the dd commands above wiped its disks.

It’s nearly historic (single-core!):

benten ~ $ cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 15
model           : 1
model name      : Intel(R) Pentium(R) 4 CPU 1.50GHz
stepping        : 2
cpu MHz         : 1495.230
cache size      : 256 KB
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 2
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm up pebs bts
bogomips        : 2995.24
clflush size    : 64
power management:

benten ~ $ free
             total       used       free     shared    buffers     cached
Mem:        775932     766372       9560          0      85720     273792
-/+ buffers/cache:     406860     369072
Swap:       975200      57904     917296

benten ~ $ fdisk -l 
Disk /dev/hda: 20.0 GB, 20020396032 bytes
Disk /dev/hdb: 500.1 GB, 500107862016 bytes
Disk /dev/hdc: 80.0 GB, 80026361856 bytes
Disk /dev/hdd: 500.1 GB, 500107862016 bytes

benten ~ $ cat /proc/mdstat
Personalities : [raid1] 
md0 : active raid1 hdb1[0] hdd1[2](F)
      488383936 blocks [2/1] [U_]

Advanced MPI Programming Tutorial at Supercomputing 2013

Pavan Balaji, Jim Dinan, Rajeev Thakur and I are giving our Advanced MPI Programming tutorial at Supercomputing 2013 on Sunday November 17th.

Are you wondering about the new MPI-3 standard? How does it affect you as a scientific or HPC programmer, and which nice new features can you use to make your life easier and your application faster? Then you should not miss our tutorial.

Our abstract summarizes the main topics:

The vast majority of production parallel scientific applications today use MPI and run successfully on the largest systems in the world. For example, several MPI applications are running at full scale on the Sequoia system (on ~1.6 million cores) and achieving 12 to 14 petaflop/s of sustained performance. At the same time, the MPI standard itself is evolving (MPI-3 was released late last year) to address the needs and challenges of future extreme-scale platforms as well as applications. This tutorial will cover several advanced features of MPI, including new MPI-3 features, that can help users program modern systems effectively. Using code examples based on scenarios found in real applications, we will cover several topics including efficient ways of doing 2D and 3D stencil computation, derived datatypes, one-sided communication, hybrid (MPI + shared memory) programming, topologies and topology mapping, and neighborhood and nonblocking collectives. Attendees will leave the tutorial with an understanding of how to use these advanced features of MPI and guidelines on how they might perform on different platforms and architectures.

This tutorial is about advanced use of MPI. It will cover several advanced features that are part of MPI-1 and MPI-2 (derived datatypes, one-sided communication, thread support, topologies and topology mapping) as well as new features that were recently added to MPI as part of MPI-3 (substantial additions to the one-sided communication interface, neighborhood collectives, nonblocking collectives, support for shared-memory programming).

Implementations of MPI-2 are widely available both from vendors and open-source projects. In addition, the latest release of the MPICH implementation of MPI supports all of MPI-3. Vendor implementations derived from MPICH will soon support these new features. As a result, users will be able to use in practice what they learn in this tutorial.

The tutorial will be example-driven, reflecting scenarios found in real applications. We will begin with a 2D stencil computation with a 1D decomposition to illustrate simple Isend/Irecv-based communication.
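
To give an idea of the kind of code this first example leads to, a rough halo exchange for a 1D decomposition (my own sketch with made-up buffer and size names, not the tutorial’s code) might look like this:

#include <mpi.h>
#include <stdlib.h>

#define N 1024   /* local number of rows/columns, illustrative */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* local block of N x N points plus one ghost row above and below */
    double *u = calloc((size_t)(N + 2) * N, sizeof(double));
    int up   = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int down = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    MPI_Request reqs[4];
    /* exchange ghost rows with the two neighbors */
    MPI_Irecv(&u[0],           N, MPI_DOUBLE, up,   0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&u[(N + 1) * N], N, MPI_DOUBLE, down, 1, MPI_COMM_WORLD, &reqs[1]);
    MPI_Isend(&u[1 * N],       N, MPI_DOUBLE, up,   1, MPI_COMM_WORLD, &reqs[2]);
    MPI_Isend(&u[N * N],       N, MPI_DOUBLE, down, 0, MPI_COMM_WORLD, &reqs[3]);

    /* ... update interior points here to overlap with communication ... */

    MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);

    free(u);
    MPI_Finalize();
    return 0;
}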

We will then use a 2D decomposition to illustrate the need for MPI derived datatypes. We will introduce a simple performance model to demonstrate what performance can be expected and compare it with actual performance measured on real systems. This model will be used to discuss, evaluate, and motivate the rest of the tutorial.

We will use the same 2D stencil example to illustrate various ways of doing one-sided communication in MPI and discuss the pros and cons of the different approaches as well as regular point-to-point communication. We will then discuss a 3D stencil without getting into complicated code details.
We will use examples of distributed linked lists and distributed locks to illustrate some of the new advanced one-sided communication features, such as the atomic read-modify-write operations.
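
As a hint of what such a lock can build on (my own function-level sketch, not the tutorial’s actual lock code), acquiring a simple spin lock stored as an int at displacement 0 of rank 0’s window with MPI_Compare_and_swap could look like this:

#include <mpi.h>

void lock_acquire(MPI_Win win) {
    int unlocked = 0, locked = 1, prev;
    do {
        MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
        /* atomically set the lock word to 1 if (and only if) it is still 0 */
        MPI_Compare_and_swap(&locked, &unlocked, &prev, MPI_INT, 0, 0, win);
        MPI_Win_unlock(0, win);
    } while (prev != unlocked);   /* retry until we were the ones who swapped 0 -> 1 */
}
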
We will discuss the support for threads and hybrid programming in MPI and provide two hybrid versions of the stencil example: MPI+OpenMP and MPI+MPI. The latter uses the new features in MPI-3 for shared-memory programming. We will also discuss performance and correctness guidelines for hybrid programming.

We will introduce process topologies, topology mapping, and the new “neighborhood” collective functions added in MPI-3. These collectives are particularly intended to support stencil computations in a scalable manner, both in terms of memory consumption and performance.

We will conclude with a discussion of other features in MPI-3 not explicitly covered in this tutorial (interface for tools, Fortran 2008 bindings, etc.) as well as a summary of recent activities of the MPI Forum beyond MPI-3.

Our planned agenda for the day is:

  1. Introduction (8.30–10.00)
    • Background: What is MPI
    • MPI-1, MPI-2, MPI-3
    • 2D stencil code with 1D decomposition: Isend/Irecv version
    • 2D stencil code with 2D decomposition: Introduce derived datatypes
    • Introduce simple performance modeling and measurement
  2. One-Sided Communication (10.30–12.00)
    • Basics of one-sided communication or remote memory access (RMA)
    • 2D stencil code with 1D decomposition: RMA with 3 forms of synchronization
    • 3D stencil: What changes and what to pay attention to
    • Introduce other features of MPI-3 RMA
    • Linked list or distributed lock example demonstrating new MPI-3 RMA features
  3. Lunch (12.00–1.30)
  4. MPI and Threads (1.30–3.00)
    • What does the MPI standard specify about threads
    • How does it enable hybrid programming
    • Hybrid (MPI+OpenMP) version of 2D stencil code
    • Hybrid (MPI+MPI) version of 2D stencil code using MPI-3 shared-memory support
    • Performance and correctness guidelines for hybrid programming
  5. Topologies, Neighborhood/Nonblocking Collectives (3.30–5.00)
    • Topologies and topology mapping
    • 2D stencil code with 2D decomposition using neighborhood collectives
    • MPI-3 nonblocking collectives with example
    • Summary of other features in MPI-3
    • Summary of recent activities of the MPI Forum
    • Conclusions

We’re looking forward to many interesting discussions!