A Promising New Open-Access Journal in HPC/Supercomputing!

The open-access journal movement is spreading quickly, and for good reason: the community does the research, the writing, and the refereeing, while printed journal copies become less and less relevant, so journals that are free to the whole community make a lot of sense. One such journal recently appeared to serve the high-performance computing/supercomputing community: “Supercomputing Frontiers and Innovations”.

The journal is led by Jack Dongarra and Vladimir Voevodin, supported by a world-class editorial board (spoiler: I am on the board as well).

The first volume appeared in two parts: part one and part two. As one would expect from an open-access journal, all articles and even the whole issues can be downloaded as PDF. I am happy to have one of the limited-edition hard copies of the second issue:

I published an overview of collective operation algorithms and analytic performance models for time and energy in this journal. Working with the staff has been very pleasant, and the open-access model guarantees quick and wide distribution without paywalls.

I read both issues with great interest and found the papers to be of very high quality. SuperFri has a good chance to quickly emerge as a leading journal in high-performance computing. Submissions are open at http://superfri.org/.

11 SPCL@ETH activities at SC14

The Intl. Supercomputing (SC) conference is clearly the main event in HPC. Its program is broad and more than 10k people attend annually. SPCL focuses mainly on the technical program, which is what makes SC the top-tier conference in HPC; it is the main conference of a major ACM SIG (SIGHPC).

This year, SPCL members co-authored three papers in the very competitive technical program, which draws several thousand attendees! One was even nominated for the best student paper award, and to give it away upfront: we got it! Congrats Maciej! All talks were very well attended (more than 100 people in the room).

All of these talks were presented by collaborators, so I was hoping to be off the hook. Well, not quite, because I gave seven (7!) invited talks at various events and participated in teaching a full-day tutorial on advanced MPI. The highlight was a keynote at the LLVM workshop. I was also running around all the time because I co-organized the overall workshop program (with several thousand attendees) at SC14.

So let me share my experience of all these exciting events in chronological order!

1) Sunday: IA3 Workshop on Irregular Applications: Architectures & Algorithms

This workshop was very nice: it was kicked off by top-class keynotes from Onur Mutlu (CMU) and Keshav Pingali (UT), followed by great paper talks and a panel in the afternoon. I served on the panel with some top-class people and it was a lot of fun!


Giving my panel presentation on accelerators for graph computing.


Arguing during the panel discussion (the topic at that moment: Hadoop) with (left to right): Keshav Pingali (UT Austin), John Shalf (Berkeley), me (ETH), Clayton Chandler (DOD), Benoit Dupont de Dinechin (Kalray), Onur Mutlu (CMU), and Maya Gokhale (LLNL). A rather argumentative group :-) .

My slides can be found here.

2) Monday – LLVM Workshop

It was long overdue to discuss the use of LLVM in the context of HPC, so thanks to Hal Finkel and Jeff Hammond for organizing this fantastic workshop! I kicked it off with some considerations about runtime recompilation and how it can improve codes.

The volunteers counted around 80 attendees in the room! Not too bad for a workshop. My slides on “A case for runtime recompilation in HPC” are here.

3) Monday – Advanced MPI Tutorial

Our tutorial attendee numbers keep growing! More than 67 people registered, but it felt like even more showed up. We also released the new MPI books, especially “Using Advanced MPI”, which shortly after became the top new release in Amazon’s parallel processing category.

4) Tuesday – Graph 500 BoF

There, I released the fourth Green Graph 500 list. Not much new happened on the list (same as for the Top500 and Graph500), but the BoF was still fun! Peter Kogge presented some interesting views on the data of the list. My slides can be found here.

5) Tuesday – LLVM BoF

Concurrently with the Graph 500 BoF ran the LLVM BoF, so I was supposed to speak at both at the same time. Well, that didn’t go too well (I’m still only one person; apologies to Jim). I only caught 20% of this BoF, but it was great! Again, very good turnout; LLVM is certainly becoming more important every year. My slides are here.

6) Tuesday – Simulation BoF

There are many simulators in HPC, often for different purposes but sometimes for similar ones. We discussed how to collaborate and focus our efforts better. I represented LogGOPSim, SPCL’s discrete-event simulator for parallel applications.

My talk summarized its features and achievements; the slides can be found here.

7) Tuesday – Paper Talk “Slim Fly: A Cost Effective Low-Diameter Network Topology”

Our paper was up for Best Student Paper and Maciej did a great job presenting it. But no need to explain, go and read it here!


Maciej presenting the paper! Well done.

8) Wednesday – PADAL BoF – Programming Abstractions for Data Locality

Programming has to become more data-centric as architectures evolve. This BoF followed an earlier workshop in Lugano on the same topic. It was great — no slides this time, just an open discussion! I hope I didn’t upset David Padua :-) .


Didem Unat moderated, and the panelists were Paul Kelly (Imperial), Brad Chamberlain (Cray), Naoya Maruyama (TiTech), David Padua (UIUC), me (ETH), and Michael Garland (NVIDIA). It was a truly lively BoF :-) .

But hey, I just got it in writing from the Swiss that I’m not qualified to talk about this topic — bummer!


The room was packed and the participation was great. We didn’t get to the third question! I loved the education question, we need to change the way we teach parallel computing.

9) Wednesday – Paper Talk “Understanding the Effects of Communication and Coordination on Checkpointing at Scale”

Kurt Ferreira, a collaborator from Sandia, spoke about unexpected overheads of uncoordinated checkpointing, analyzed using LogGOPSim (it’s a cool name!!). Go read the paper if you want to know more!


Kurt speaking.

10) Thursday – Paper Talk “Fail-in-Place Network Design: Interaction between Topology, Routing Algorithm and Failures”

Presented by Jens Domke, a collaborator from Tokyo Tech (now at TU Dresden). A nice analysis of what happens to a network when links or routers fail. Read about it here.


Jens speaking.

11) Thursday – Award Ceremony

Yes, somewhat unexpectedly, we got the best student paper award, the second major technical award in a row for SPCL (after last year’s best paper).


Happy :-) .

Coverage by Michele @ HPC-CH and Rich @ insideHPC.

The MPI 3.0 Book – Using Advanced MPI

Our book “Using Advanced MPI” will appear in about a month, and now is the time to pre-order on Amazon at a reduced price. It is published by the prestigious MIT Press and is a must-read for parallel computing experts.

The book contains everything advanced MPI users need to know. It presents all important concepts of MPI 3.0 (including all newly added functions such as nonblocking collectives and the largely extended One Sided functionality). But the key is that the book is written in an example-driven style: all functions are motivated with use cases, and working code is available for most. This follows the successful tradition of the “Using MPI” series, lifting it to MPI-3.0, and hopefully makes it an exciting read!

David Bader’s review hits the mark:

With the ubiquitous use of multiple cores to accelerate applications ranging from science and engineering to Big Data, programming with MPI is essential. Every software developer for high performance applications will find this book useful for programming on modern multicore, cluster, and cloud computers.

Here is a quick overview of the contents:

Section 1: “Introduction” provides a brief overview of the history of MPI and summarizes its basic concepts.

Section 2: “Working with Large Scale Systems” contains examples of how to create highly scalable systems using nonblocking collective operations, the new distributed graph topology for MPI topology mapping, neighborhood collectives, and advanced communicator creation functions. It equips readers with everything they need to write highly scalable codes. It even describes how fault-tolerant applications could be written using a high-quality MPI implementation.
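To give a flavor of one of these features (a minimal sketch of my own, not code from the book): an MPI-3.0 nonblocking collective lets you overlap a reduction with independent computation.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = (double)rank, sum = 0.0;
    MPI_Request req;
    /* start the reduction without blocking */
    MPI_Iallreduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD, &req);
    /* ... do independent work here while the collective progresses ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);  /* complete the collective */

    if (rank == 0) printf("sum of all ranks: %.0f\n", sum);
    MPI_Finalize();
    return 0;
}
```

Compile with mpicc and launch with mpiexec as usual; the interesting part is the window between MPI_Iallreduce and MPI_Wait, where useful work can hide the communication time.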

Section 3: “Introduction to Remote Memory Operations” is a gentle and light introduction to RMA (One Sided) programming in MPI-3.0. It starts with the concepts of memory exposure (windows) and simple data movement, presents various example problems followed by practical advice for avoiding common pitfalls, and concludes with a discussion of performance.
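As a taste of the topic (again a sketch of mine, not the book’s code): exposing memory in a window and moving data with a fence-synchronized MPI_Put might look like this.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int buf = -1;  /* this variable is exposed for remote access */
    MPI_Win win;
    MPI_Win_create(&buf, sizeof(int), sizeof(int), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);            /* open an access epoch */
    if (rank == 0) {
        int val = 42;
        /* write 42 into the window of the last rank */
        MPI_Put(&val, 1, MPI_INT, size - 1, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);            /* close the epoch; the Put is complete */

    if (rank == size - 1) printf("rank %d received %d\n", rank, buf);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```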

Section 4: “Advanced Remote Memory Access” will make you a true expert in RMA programming. It covers advanced concepts such as passive target mode and allocating MPI windows, using various examples, and it also discusses memory models and scalable synchronization approaches.
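For illustration, here is a sketch of mine (not from the book) of passive target mode with an allocated window, implementing a simple shared counter: the target rank does not participate in the communication at all.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int *base;
    MPI_Win win;
    /* let MPI allocate the window memory (often faster than MPI_Win_create) */
    MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &base, &win);
    *base = 0;
    MPI_Barrier(MPI_COMM_WORLD);  /* all windows are initialized */

    /* every rank atomically increments a counter on rank 0
     * (passive target mode: no action required at the target) */
    int one = 1, old;
    MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
    MPI_Fetch_and_op(&one, &old, MPI_INT, 0, 0, MPI_SUM, win);
    MPI_Win_unlock(0, win);

    MPI_Barrier(MPI_COMM_WORLD);
    if (rank == 0) printf("counter: %d\n", *base);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```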

Section 5: “Using Shared Memory with MPI” explains MPI’s approach to shared memory. MPI-3.0 added support for allocating shared memory, which essentially enables the new hybrid programming model “MPI+MPI“. This section explains the guarantees that MPI provides (and those it does not) and several use cases for shared memory windows.
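A minimal sketch of the mechanism (my own, hedged accordingly): split off a node-local communicator, allocate a shared segment, and access a neighbor’s part with plain loads.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* group the ranks that can share memory (i.e., are on the same node) */
    MPI_Comm shmcomm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &shmcomm);
    int srank, ssize;
    MPI_Comm_rank(shmcomm, &srank);
    MPI_Comm_size(shmcomm, &ssize);

    /* each rank contributes one int to a node-wide shared segment */
    int *base;
    MPI_Win win;
    MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                            shmcomm, &base, &win);

    MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
    *base = srank;            /* direct store into my part of the segment */
    MPI_Win_sync(win);        /* make the store visible ... */
    MPI_Barrier(shmcomm);     /* ... and wait until everybody has written */
    MPI_Win_sync(win);

    /* read the right neighbor's element with a plain load */
    MPI_Aint sz; int disp; int *nbr;
    MPI_Win_shared_query(win, (srank + 1) % ssize, &sz, &disp, &nbr);
    printf("rank %d sees neighbor value %d\n", srank, *nbr);
    MPI_Win_unlock_all(win);

    MPI_Win_free(&win);
    MPI_Comm_free(&shmcomm);
    MPI_Finalize();
    return 0;
}
```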

Section 6: “Hybrid Programming” provides a detailed discussion of how to use MPI in cooperation with other programming models, for example threads or OpenMP. Hybrid programming is emerging as a standard technique, and MPI-3.0 introduced several functions to ease the cooperation with other models.
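For example, a hybrid code first negotiates its thread support level with the MPI library; a small sketch (mine, not from the book):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided;
    /* request the strongest level: any thread may call MPI concurrently */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_FUNNELED) {
        /* not even "main thread only" is guaranteed; bail out */
        fprintf(stderr, "insufficient thread support (%d)\n", provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    if (provided < MPI_THREAD_MULTIPLE) {
        /* fall back: e.g., funnel all MPI calls through the main thread */
    }

    printf("provided thread level: %d\n", provided);
    MPI_Finalize();
    return 0;
}
```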

Section 7: “Parallel I/O” will be most important in the coming Big Data world. MPI provides a large set of facilities for operating on large distributed data sets. We discuss how MPI supports contiguous and noncontiguous accesses as well as the consistency of file operations, and we provide hints for improving the performance of MPI I/O.
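A small sketch of the basic pattern (mine, with the usual hedging): each rank writes its own block of a shared file with a collective call, letting the MPI library optimize the access.

```c
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    int data = rank;
    /* collective write: rank i writes its int at offset i*sizeof(int) */
    MPI_File_write_at_all(fh, (MPI_Offset)rank * sizeof(int),
                          &data, 1, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
```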

Section 8: “Coping with Large Data” deals with what happens once Big Data sets are in main memory: we may need to communicate them. MPI-3.0 supports handling large data (>2 GiB) through derived datatypes. We explain how to enable this support and the limitations of the current interface.
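The underlying trick, as I would sketch it (this is my illustration, not the book’s exact code): nest derived datatypes so that a single transfer with a small count describes far more than 2^31 elements.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* nest two contiguous types: 2^20 doubles per chunk, 2^12 chunks,
     * i.e., 2^32 doubles (32 GiB) in one datatype, far beyond what a
     * 32-bit count can express */
    MPI_Datatype chunk, big;
    MPI_Type_contiguous(1 << 20, MPI_DOUBLE, &chunk);
    MPI_Type_contiguous(1 << 12, chunk, &big);
    MPI_Type_commit(&big);

    MPI_Count bytes;
    MPI_Type_size_x(big, &bytes);  /* MPI-3.0 large-count type query */
    printf("one instance of 'big' covers %lld bytes\n", (long long)bytes);
    /* a single MPI_Send(buf, 1, big, ...) would now move all of it */

    MPI_Type_free(&big);
    MPI_Type_free(&chunk);
    MPI_Finalize();
    return 0;
}
```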

Section 9: “Support for Performance and Correctness Debugging” is aimed at very advanced programmers as well as tool developers. It describes the MPI tools interface (MPI_T), which allows one to introspect the internals of the MPI library. Its flexible interface exposes performance counters and control variables that influence the behavior of MPI. Advanced expert programmers will love this interface for architecture-specific tuning!
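To illustrate (my sketch; which variables exist differs per MPI implementation), the tools interface can enumerate the control variables a library exposes:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided;
    /* the tools interface is initialized separately from MPI itself */
    MPI_T_init_thread(MPI_THREAD_SINGLE, &provided);
    MPI_Init(&argc, &argv);

    int ncvar;
    MPI_T_cvar_get_num(&ncvar);
    printf("this MPI library exposes %d control variables\n", ncvar);

    for (int i = 0; i < ncvar; i++) {
        char name[256], desc[256];
        int nlen = sizeof(name), dlen = sizeof(desc);
        int verbosity, bind, scope;
        MPI_Datatype dtype;
        MPI_T_enum etype;
        MPI_T_cvar_get_info(i, name, &nlen, &verbosity, &dtype, &etype,
                            desc, &dlen, &bind, &scope);
        printf("  %s: %s\n", name, desc);
    }

    MPI_Finalize();
    MPI_T_finalize();
    return 0;
}
```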

Section 10: “Dynamic Process Management” explains how processes can be created and managed at runtime. This feature enables growing and shrinking MPI jobs during their execution and fosters new programming paradigms, provided it is supported by the batch system. We only discuss the MPI side in this chapter though.
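A sketch of the basic mechanism (mine; the “./worker” binary is hypothetical, and I assume a single parent process):

```c
#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* start 4 new processes running a (hypothetical) ./worker binary;
     * they are reachable through the returned inter-communicator */
    MPI_Comm children;
    int errcodes[4];
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL, 0,
                   MPI_COMM_WORLD, &children, errcodes);

    /* hand out work over the inter-communicator; with a single parent,
     * the parent passes MPI_ROOT as the root argument */
    int work = 42;
    MPI_Bcast(&work, 1, MPI_INT, MPI_ROOT, children);

    MPI_Comm_disconnect(&children);
    MPI_Finalize();
    return 0;
}
```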

Section 11: “Working with Modern Fortran” is a must-read for Fortran programmers! How does MPI support type-safe programming and what are the remaining pitfalls and problems in Fortran?

Section 12: “Features for Libraries” addresses advanced library writers and describes principles for developing portable, high-quality MPI libraries.
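One such principle, sketched here with a hypothetical library mylib of my own invention: duplicate the user’s communicator at initialization, so the library’s messages can never match the application’s.

```c
#include <mpi.h>

/* hypothetical library handle and API */
typedef struct { MPI_Comm comm; } mylib_t;

void mylib_init(mylib_t *lib, MPI_Comm user_comm) {
    /* private communicator: isolates all library traffic */
    MPI_Comm_dup(user_comm, &lib->comm);
}

void mylib_finalize(mylib_t *lib) {
    MPI_Comm_free(&lib->comm);
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    mylib_t lib;
    mylib_init(&lib, MPI_COMM_WORLD);
    /* ... application and library messages can no longer interfere ... */
    mylib_finalize(&lib);
    MPI_Finalize();
    return 0;
}
```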

SPCL hike v2.0, this time to the Grosse Mythen

So we did it again: we celebrated Greg’s, Maciej’s, and Bogdan’s recent successes with a barbecue. But we would not be SPCL if it were just any barbecue … of course it was at the top of a mountain, this time the “Grosse Mythen”.

Remember v1.0 to Mount Rigi? Switzerland is absolutely awesome! Mountains, everything is green, ETH :-) .

Here are some impressions:

track
The complete tour was 6.9 km with a total walking time of 3.35 hours and an altitude difference of only 750 m (from 1149 m to 1899 m). Much shorter and less altitude gain than our last trip, but not less fun! And yes, my GPS was a bit off :-/.

IMG_1936
So this is how it looked from the start; I have to say, pretty impressive. But it turned out to be much, much simpler than it looks, and MUCH simpler than Rigi was :-) .

IMG_1940
Do you see the flag on top (yeah, it’s the red pixel on the right side in this resolution)? That’s where we’ll hike!

IMG_1946
Our first real view towards Brunni.

IMG_1943
The path is actually a bit more strenuous at the beginning than later on, but overall simple.

IMG_1947
It still seems impressive!

IMG_1957
Lots of planes around Mythen, some seem rather historic, like this one.

IMG_1951
First stage done — arrived at Holzegg and view down to Brunni (yeah, we could have taken the cable car but are we men or what?). SPCLers walk up mountains!

IMG_1961
Now the steep part begins. It’s actually a bit dangerous: many places where you can fall a couple of hundred meters :-) .

IMG_1971
Mac still looks too good … we should have taken more water and food :-) . Remember Rigi?

IMG_1965
Nice views and deep abysses.

IMG_1976
The path goes basically vertically (in serpentines) up a rock wall. Nice! You should not be afraid of heights …

IMG_1972
… because you will constantly see beautiful things like this …

IMG_1986
… or this …

IMG_1993
… or horror abysses right at the trail like this :-) .

IMG_2003
But there were helpers; these nice chains probably saved our lives more than once.

IMG_1988
Mac acquired a second backpack and some stones on the way … and he’s still looking too good!

IMG_2000
A nice bench … again, for people not afraid of heights.

IMG_2010
The chains at the abyss :-) .

IMG_2007
The neighboring “Kleiner Mythen” is apparently much harder to climb. And it’s smaller, so why would we climb it anyway!? ;-)

IMG_2011
The path: awesome! Walking along a nice, thin ridge.

IMG_2024
And again, some nice opportunities to fly, ahem, fall.

IMG_2012
Still snow around the top in May.

IMG_2028
Finally, the top.

IMG_2029
The top – we made it! Most importantly: the Swiss flag :-) .

IMG_2032
Beautiful views …

IMG_2030
The last ascent to the very top … still looking too fresh!

IMG_2039
Beautiful.

IMG_2045
We all made it to the top alive (and later back down).

IMG_2041
Nice weather actually.

IMG_2060
Relaxing with a beer … tststs.

IMG_2059
More beer!?

IMG_2059
Meanwhile food preparations start.

IMG_2062
The grill didn’t start that well … we should have sent smoke signals down to the valley ;-) .

IMG_2065
Some folks took shifts blowing on it.

IMG_2064
We brought some nice food … to the nice view!

IMG_2070
And ate it quickly ;-) .

IMG_2072
To avoid fights with the locals!

IMG_2071
Finally the grill started … took 30 mins or so ;-) .

IMG_2080
Too much food!

IMG_2083
Beautiful views, did I mention that it was rather high?

IMG_2082
But the most beautiful view: Rigi was just left of this.

IMG_2099
And of course, if you travel with a Pole, you’ll get some top vodka.

IMG_2090
The car was waiting patiently in Brunni.

IMG_2118
On the way back down.

IMG_2112
Many opportunities for free falls :-) .

IMG_2126
And beautiful alpine flowers.

IMG_2130
I drove, and we made some contact with cows :-) .

IMG_2128
Others took the bus back home.

All in all, *awesome* and very efficient; the whole tour took 7.5 hours :-) . So we set a new standard for SPCL hike v3.0!

ExaMPI’13 Workshop at SC13

I wanted to highlight the ExaMPI’13 workshop at SC13. It was a while ago but it is worth reporting!

The workshop’s theme was “Exascale MPI”, and it addressed several topics on how to move MPI to the next big divisible-by-10^3 floating point number. Actually, for Exascale, it’s unclear whether it’s only FLOPs; maybe it’s data now, but then we easily have machines with Exabytes :-) . Anyway, MPI is a viable candidate to run on future large-scale machines, maybe at a low level.

A while ago, some colleagues and I summarized the issues that MPI faces in going to large scale: “MPI on Millions of Cores“. The conclusion was that it’s possible to move forward but some non-scalable elements need to be removed or avoided in MPI. This was right on topic for this workshop, and indeed, several authors of the paper were speaking!

The organizers invited me to give a keynote to kick off the event. I talked about large-scale MPI and large-scale graph analysis and how the latter can be done in MPI. [Slides]

The very nice organizers sent me some pictures that I want to share here:


My keynote on large-scale MPI and graph algorithms.


The gigantic room was well filled (I’d guess more than 50 people).


Jesper talking about the EPIGRAM project, which addresses MPI’s needs at future large scales.


The DEEP strategy of Jülich using inter-communicators (the first users I know of).


Pavan on our heterogeneous future, very nice insights.

All in all, a great workshop with a very good atmosphere. I received many good questions and had very good discussions afterwards.

Kudos to the organizers!

HiPEAC’13 in Vienna

This year, I attended my first HiPEAC conference; we had a paper in the main track. It was in Vienna and thus really easy to reach (1 hour by plane). I actually thought about commuting from Zurich every day.

Bogdan presented the paper and did a very good job! I received several positive comments afterwards. Kudos Bogdan!

SPCL at Supercomputing (SC13)

Supercomputing is the premier conference in high performance and parallel computing. With more than 10,000 attendees, it’s also the largest and highest-impact conference in the field. Literally everybody is there. Torsten summarized the role of the conference in a blog post over at CACM.

SPCL had a great year at Supercomputing 2013 (SC13)! We’ve been involved in multiple elements of the technical program:

  1. Three paper talks (Alexandru, Andrew, Robert) in the technical papers program
  2. Two posters (Aditya, Robert) in the posters program
  3. SPCL researchers received three awards
  4. Torsten gave a keynote at the ExaMPI’13 workshop
  5. Torsten co-presented a well-attended (~50 people) tutorial on advanced MPI programming
  6. Torsten revealed the 2nd Green Graph 500 List at the Graph 500 BoF
  7. Torsten co-organized the Emerging Technologies program

Here are some impressions:

IMG_1723_small
Impressions from the Emerging Technologies Booth

IMG_1746_small
Impressions from the Emerging Technologies Booth

IMG_1752_small
Impressions from the Emerging Technologies Booth

IMG_1735_small
Impressions from the Emerging Technologies Booth

IMG_1840_small
Emerging Technology Booth Talks were generally well attended!

IMG_1787_small
Maciej and his session chair (Rajeev Thakur) are preparing for the presentation of our best paper candidate.

IMG_1789_small
The large room was packed (this shows only the right half; the left was just as full).

IMG_1803_small
Robert and Maciej fielding questions.

IMG_1856_small
Robert presenting his poster … unfortunately, Maciej didn’t take a picture of Aditya and his poster which was upstairs (and looked at least as good :-) ).

IMG_1807_small
Torsten releasing the 2nd Green Graph 500 list with surprises in the top ranks!

sc13_best_paper
The SC13 Best Paper team and presentation!

sc13_yasc
Torsten receives his “Young Achievers in Scalable Computing” Award.

IMG_1952_small
Torsten receives his “Young Achievers in Scalable Computing” Award.

IMG_1959_small
Robert received the ACM SRC Bronze medal.

All in all – a great success! Congrats and thanks to everyone who contributed!

Some more nice pictures can be found here.

Emerging Technologies ramping up at SC13

A new element of this year’s Supercomputing conference (SC13), Emerging Technologies, is coming to life right now. The impressively sized booth (see below) features 17 diverse high-impact projects that will change the future of Supercomputing!

et_booth_bringup

Emerging Technologies (ET) is part of the technical program, and all proposals were reviewed in an academically rigorous process. However, as opposed to the standard technical program, ET is located on the main show floor (booth #3547). This makes it possible to demonstrate technologies and innovations that would otherwise not reach the show floor.

The standing exhibit is complemented by a series of short talks about the technologies. These talks will take place during the afternoons on Tuesday and Wednesday in the neighboring “HPC Impact Showcase” theater (booth #3947).

Check out http://sc13.supercomputing.org/content/emerging-technologies for the booth talks program!

Bob Lucas and I have been organizing the Emerging Technologies exhibit this year, and Satoshi Matsuoka will run it for SC14 :-) .

So make sure to swing by if you’re at SC13. It’ll definitely be a great experience!