SPCL activities at SC17

SC17 is over, and even though it was my 10th anniversary, it wasn't the best of the SC series. Actually, if you ask me personally, it was probably the worst, but I promised not to discuss the details here. Fortunately, I'll be technical papers chair next year, with Todd Gamblin as vice chair, so we'll make sure it remains purely technical. The SC series is and remains strong!

SPCL was again present in many areas across the technical program. Konstantin, Tobias, Salvatore, and I were involved in many things. Here are the fourteen most significant appearances:

1) Sunday: Torsten presented Capability Models for Manycore Memory Systems: A Case-Study with Xeon Phi KNL and the COSMO Weather Code at the Intel HPC Developer Conference

The room was packed and people were standing :-). [Slides]

2) Sunday: Salvatore presented LogGOPSim version 2 at the ExaMPI workshop

3) Monday: Tobias talks about “Improved Loop Distribution in LLVM Using Polyhedral Dependences” at the LLVM workshop [program]

4) Monday: Torsten co-presents the Advanced MPI Tutorial [program]

5) Monday: Torsten speaks at the Early Career Panel about how to publish [program]

6) Monday: Salvatore presents his work on SimFS at the PDSW workshop

7) Tuesday: Torsten presents the sPIN talk at the TiTech booth

8) Tuesday: Torsten talks at the 25-years-of-MPI and 20-years-of-OpenMP celebration at the Intel booth

MPI+MPI or MPI+OpenMP is the question :-).

9) Tuesday: Torsten appears at the SIGHPC annual members meeting as an elected member (slightly late due to the Intel celebration)

10) Tuesday: Konstantin presents his poster Unifying Replication and Erasure Coding to Rule Resilience in KV-Stores at the poster reception

11) Wednesday: Torsten presents the sPIN paper in the technical program

The room was full. Unfortunately, the session chair's clock was wrong, so we started 5 minutes early and people streamed in late :-(. Sorry! But that was the least of what went wrong here …

12) Wednesday: Salvatore presents his poster on Virtualized Big Data: Reproducing Simulation Output on Demand as an ACM SRC semi-finalist


13) Thursday: Edgar presents the paper Scaling Betweenness Centrality Using Communication-Efficient Sparse Matrix Multiplication in the technical program

14) Friday: Torsten co-organizes the H2RC workshop

The triple room was packed (~150-200 people during the keynote).

Persistent Collective Operations in MPI-3 for free!

Dandelion (source: thewishwall.org)

We discussed persistent collectives at the MPI Forum last week. It was a great meeting and the discussions were very insightful. I really like persistent collectives and believe that MPI implementors should support them!

In that context, I wanted to note that implementors can do this easily and elegantly in MPI-3 without any changes to the standard. We used this technique already in 2012 in the paper “Optimization Principles for Collective Neighborhood Communications”. But let me recap the idea here.

The key ingredients are communicators (MPI's name for immutable process groups) and info objects. Info objects are a mechanism for users to tell the library how they will use MPI; they are very similar to pragmas in C/C++. Some info strings are defined by the standard itself, but MPI libraries may accept arbitrary additional strings.

So one way to specify a persistent collective is to duplicate the communicator to create a new name, e.g., my_persistent_comm. On this communicator, the user can set an info object to make specific operations persistent, e.g., mympi_bcast_is_persistent. The MPI library is encouraged to choose a prefix specific to itself (in this case “mympi”).

The library can now set a flag on the communicator that is checked at each broadcast call to determine whether it is persistent. By passing this info object, the user guarantees that the arguments passed to the specific call (e.g., bcast) on this communicator will always be the same. Thus, the MPI library can specialize the call to those arguments (i.e., implement all optimizations possible for persistence) once it has seen the first invocation of MPI_Ibcast().
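For illustration, here is a minimal sketch of the user side of this scheme. The info key mympi_bcast_is_persistent is of course specific to our imaginary implementation “mympi”; everything else is plain MPI-3.

```c
#include <mpi.h>

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);

  /* Attach the (implementation-specific) hint to a dedicated communicator. */
  MPI_Info info;
  MPI_Info_create(&info);
  MPI_Info_set(info, "mympi_bcast_is_persistent", "true");

  MPI_Comm my_persistent_comm;
  MPI_Comm_dup_with_info(MPI_COMM_WORLD, info, &my_persistent_comm);
  MPI_Info_free(&info);

  double buf[1024];
  for (int iter = 0; iter < 100; iter++) {
    /* The user promises that buffer, count, datatype, and root never change
       on this communicator, so the library may specialize the operation
       after the first call. */
    MPI_Request req;
    MPI_Ibcast(buf, 1024, MPI_DOUBLE, 0, my_persistent_comm, &req);
    /* ... overlap with computation ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
  }

  MPI_Comm_free(&my_persistent_comm);
  MPI_Finalize();
  return 0;
}
```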

This interface is very flexible; one could even imagine various levels of persistence as defined in our 2012 paper: (1) persistent topology (this is implicit in normal and neighborhood collectives), (2) persistent message sizes, and (3) persistent buffers (sizes and addresses). We describe optimizations for each level in the paper. These levels should be considered in any MPI specification effort.

I agree that having some official support for persistence in the standard would be great, but these levels and info arguments should at least be discussed as an alternative. It seems that large parts of the MPI Forum are not aware of this idea (which is part of why I am writing this post 😉 ).

Furthermore, I am mildly concerned about feature inflation in MPI. Adding more and more features that are not optimized because they are not used, because they have not been optimized, because they were not used … may not be the best strategy. Today's MPI implementations are not great at asynchronous progression of nonblocking collectives, and the performance of neighborhood collectives and MPI-3 RMA is mostly unconvincing. Maybe the community needs some time to optimize and use those features first. At the 25-years-of-MPI symposium, it became clear that large parts of the community share this concern.

Keep the great discussions up!

What are the real differences between RDMA, InfiniBand, RMA, and PGAS?

I am often asked how the concepts of Remote Direct Memory Access (RDMA), InfiniBand, Remote Memory Access (RMA), and Partitioned Global Address Space (PGAS) relate to each other. In fact, I see a lot of confusion in papers from communities that discovered these concepts only recently. So let me present my personal understanding here; it is of course open for discussion! Let's start in reverse order :-).

PGAS is a concept relating to programming large distributed memory machines with a shared memory abstraction that distinguishes between local (cheap) and remote (expensive) memory accesses. PGAS is usually used in the context of PGAS languages such as Co-Array Fortran (CAF) or Unified Parallel C (UPC), where language extensions (typically distributed arrays) allow the user to specify local and remote accesses. In most PGAS languages, remote data can be used like local data; for example, one can assign a remote value to a local stack variable (which may reside in a register), and the compiler will generate the code needed to implement the assignment. A PGAS language can be compiled seamlessly to target a global load/store system.

RMA is very similar to PGAS in that it is a shared memory abstraction that distinguishes between local and remote memory accesses. RMA is often used in the context of the Message Passing Interface standard (even though it does not deal with passing messages 😉 ). So why not just call it PGAS? Well, there are some subtle differences: MPI RMA is a library interface for moving data between local and remote memories. For example, it cannot move data into registers directly and may be subject to additional overheads on a global load/store machine. It is designed to be a slim and portable layer on top of lower-level data-movement APIs such as OFED, uGNI, or DMAPP. One main strength is that it integrates well with the remainder of MPI. In the MPI context, RMA is also known as one-sided communication.
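To make the “library interface” point concrete, here is a minimal MPI-3 RMA sketch of mine (not taken from any of the papers mentioned; the window contents and the neighbor pattern are arbitrary):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  /* Expose one double per process in a window. */
  double local = -1.0;
  MPI_Win win;
  MPI_Win_create(&local, sizeof(double), sizeof(double),
                 MPI_INFO_NULL, MPI_COMM_WORLD, &win);

  /* Every process writes its rank into the window of its right neighbor.
     The fences delimit the access epoch; the target CPU does not issue
     any call for the data transfer itself. */
  double value = (double)rank;
  int target = (rank + 1) % size;
  MPI_Win_fence(0, win);
  MPI_Put(&value, 1, MPI_DOUBLE, target, 0, 1, MPI_DOUBLE, win);
  MPI_Win_fence(0, win);

  printf("rank %d received %.0f\n", rank, local);
  MPI_Win_free(&win);
  MPI_Finalize();
  return 0;
}
```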

So where does RDMA now come in? Well, confusingly, it is equally close to both PGAS and its Hamming-distance-one name sibling RMA. RDMA is a mechanism to directly access data in remote memories across an interconnection network. It is, as such, very similar to machine-local DMA (Direct Memory Access), so the D is very significant! It means that memory is accessed without involving the CPU or operating system (OS) at the destination node, just like DMA. It is thus different from global load/store machines where CPUs perform direct accesses. Similarly to DMA, the OS controls protection and setup in the control path but then removes itself from the fast data path. RDMA always comes with OS bypass (at the data plane) and thus is currently the fastest and lowest-overhead mechanism to communicate data across a network. RDMA is more powerful than RMA/PGAS/one-sided: many RDMA networks such as InfiniBand also provide a two-sided message passing interface and accelerate transmissions with RDMA techniques (direct data transfer from the source buffer to the remote destination buffer). So RDMA and RMA/PGAS do not include each other!

What does this now mean for programmers and end-users? Both RMA and PGAS are programming interfaces for end-users and offer several higher-level constructs such as remote read, write, accumulates, or locks. RDMA is often used to implement these mechanisms and usually offers a slimmer interface such as remote read, write, or atomics. RDMA is usually processed in hardware and RMA/PGAS usually try to use RDMA as efficiently as possible to implement their functions. RDMA programming interfaces are often not designed to be used by end-users directly and are thus often less documented.

InfiniBand is just a specific network architecture offering RDMA. It wasn’t the first architecture offering RDMA and will probably not be the last one. Many others exist such as Cray’s RDMA implementation in Gemini or Aries endpoints. You may now wonder what RoCE (RDMA over Converged Ethernet) is. It’s simply an RDMA implementation over (lossless data center) Ethernet which is somewhat competing with InfiniBand as a wire-protocol while using the same verbs interface as API.

More precise definitions can be found in Remote Memory Access Programming in MPI-3 and Fault Tolerance for Remote Memory Access Programming Models. I discussed some ideas for future Active RDMA systems in Active RDMA – new tricks for an old dog.

How many measurements do you need to report a performance number?

The following figure from the paper “Scientific Benchmarking of Parallel Computing Systems” shows the completion times of multiple identical runs of a tuned version of High-Performance Linpack (HPL) on the same system. It illustrates how important correct measurements are: one may report 77.4 Tflop/s but, when repeating the benchmark, see as little as 61.2 Tflop/s. This suggests that one should use sound statistics when reporting any performance result.

Computer science is often about measuring computer systems. Be it time, energy, or performance, all these metrics are often non-deterministic in real computer systems and a single measurement may or may not provide a reliable result. So if you are not sloppy when measuring your system, you will measure several executions and report an aggregate measure such as the arithmetic or geometric average or the median. Well, but now the question is: “how many is several”? And this is where it gets less clear.

Typically, “several” is defined very informally, so if the measurement is cheap (such as a network latency measurement), it can be 1,000 or even 1,000,000. If it’s expensive (such as full-scale supercomputer runs), we’re very quickly back to a single measurement. But does it make sense to define the number of measurements based on the execution cost? Of course not — it should depend on the variability of the data! Who would have thought that …?

Unfortunately, most benchmarkers do not take the data variability into account at all in practice. Why not? Isn't it clear that one needs to? Yes, it is, but it's also hard! But actually, it's not that hard if one knows some basic statistics. The simplest way is to check whether one has enough measurements for a given variability in the result. But how to assess the variability? Well, one needs to look at some samples — ah, a catch-22? I need samples to know how many samples I need? Yes, that is true — in fact, the more samples I have, the higher my confidence in the variability estimate and in the correctness of my reported number.

A simple technique to assess the confidence in my measurement (we are simplifying somewhat here) is to compute a confidence interval. Confidence intervals (CIs) are a tool to provide a range of values that includes the true mean with a given probability p, depending on the estimation procedure. So if the measurement is 1 second and the 95% CI is the range [0.9;1.1], then there is a 95% probability that the true mean lies within that interval. There are two basic types of CIs: (1) confidence intervals around the mean assuming a normal distribution and (2) nonparametric confidence intervals around the median without assumptions on the distribution. The former is the simplest to compute: [mean - t(n-1,p/2)*s/sqrt(n); mean + t(n-1,p/2)*s/sqrt(n)], where mean is the arithmetic mean, s is the sample standard deviation, n is the number of samples, and t(x,p) is Student's t distribution with x degrees of freedom. So it's easy to see that the interval quickly gets tighter as the number of samples grows. But which computing system generates normally distributed measurements, i.e., measurements that are equally likely to be faster or slower than the mean? Well, my computers certainly become slower more often than faster, leading to a right-skewed distribution.

So how do we get confidence intervals for non-normally distributed measurements? Well, first of all, if the data is not normally distributed, the average makes little sense as it will be skewed as well. So one usually reports the median (the n/2-th element in the sorted set of all n measurements) as the most likely value to be observed in practice. But how do we get our confidence interval? Since we cannot assume any distribution of the values, we work on the sorted set of measurements and call the i-th value in the set the rank-i value. Now we identify the ranks floor((n-z(p/2)*sqrt(n))/2) to ceil(1+(n+z(p/2)*sqrt(n))/2) as a conservative CI (z being the corresponding quantile of the standard normal distribution, analogous to t above), which is commonly asymmetric as well.

So, OK, we can now compute this CI as a statistical measure of the certainty of our reported median. Median? Don't we like averages? Well, again, averages are not too useful for non-normally distributed data *unless* you only care about an accumulation of many measurements, i.e., you only want to know how expensive 1,000 iterations are and you do not care about every single one. Well, if this is the case, just measure the 1,000 as a whole. If you're well-versed in statistics, you will now recognize the connection to the Central Limit Theorem :-).

But now again, how many measurements do we actually need? To answer this, we first need to define a required level of confidence, for example 95%. Then, we define an accepted error in our reporting around the median, for example 1%. In other words, we would like to have enough measurements to be 95% sure that the real median is within 1% of our reported value. Hey, so we're now back to a single reported value, just together with a certainty! So how do we achieve this? Well, for normally distributed data, as in case (1), one could compute the number of needed measurements. But that doesn't work with real computers, so let's skip this here. In the nonparametric case, no explicit formula is known to us, so we need to recompute the confidence interval after each measurement (or set of measurements) and stop measuring once the 95% CI lies within the 1% interval around the median.
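To make this concrete, here is a small self-contained sketch of such an adaptive measurement loop (not from the paper; run_benchmark() is a hypothetical placeholder for one timed execution, 1.96 is the z-value for a 95% CI, and the iteration bounds are arbitrary):

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

static int cmp_double(const void *a, const void *b) {
  double x = *(const double *)a, y = *(const double *)b;
  return (x > y) - (x < y);
}

/* Hypothetical placeholder: one timed execution of the benchmark. */
static double run_benchmark(void) { return 1.0 + 0.1 * rand() / RAND_MAX; }

int main(void) {
  enum { MAX_N = 10000 };
  static double samples[MAX_N], sorted[MAX_N];
  const double z = 1.96;       /* 95% confidence */
  const double rel_err = 0.01; /* accept +/- 1% around the median */
  int n = 0;
  double median, lo, hi;

  do {
    samples[n++] = run_benchmark();

    /* Sort a copy and compute the median ... */
    for (int i = 0; i < n; i++) sorted[i] = samples[i];
    qsort(sorted, n, sizeof(double), cmp_double);
    median = (n % 2) ? sorted[n / 2]
                     : 0.5 * (sorted[n / 2 - 1] + sorted[n / 2]);

    /* ... and the nonparametric CI ranks (1-based, clamped to valid range). */
    int lo_rank = (int)floor((n - z * sqrt((double)n)) / 2.0);
    int hi_rank = (int)ceil(1.0 + (n + z * sqrt((double)n)) / 2.0);
    if (lo_rank < 1) lo_rank = 1;
    if (hi_rank > n) hi_rank = n;
    lo = sorted[lo_rank - 1];
    hi = sorted[hi_rank - 1];

    /* Stop once the CI is inside the accepted error band around the median. */
  } while (n < MAX_N && (n < 10 || hi - lo > 2.0 * rel_err * median));

  printf("median %.4f, 95%% CI [%.4f, %.4f] after %d measurements\n",
         median, lo, hi, n);
  return 0;
}
```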

Wow, so now we know how to *really* measure and report performance! In fact, in practice, we often need fewer than 1,000 measurements to reach a tight interval with high confidence. So if they're cheap, we can just as well do them and check afterwards whether the statistics make sense. But what if we run out of benchmarking budget before we reach the required accuracy, for example, when each measurement takes a day, we only have four days, and after four days the CI is still wider than we'd like it to be? Well, bad luck! In that case, we can only report the wide CI and leave it up to the reader/observer to decide whether our measurements make sense in their context.

I wish you happy (and correct) measuring! Torsten Hoefler

This blog post summarizes a part of the paper “Scientific Benchmarking of Parallel Computing Systems” which appeared at IEEE/ACM Supercomputing 2015. The full paper provides more insight and references around this topic and also the equation for the number of measurements assuming a normal distribution. The paper also establishes more rules for sound performance analyses that I may blog on later. Spread the word and cite the paper if you find these rules useful :-).

How to meet a paper deadline

Science is all about producing knowledge and insights and communicating both to other scientists (or industry). The main media of communication are papers, talks, and, increasingly, social media (Twitter, blogs, etc.). The most important and impactful are still scientific papers, but they can often be strengthened by the other channels.

In computer science (CS), serious publication venues are almost always conferences that happen at particular times each year. These come with submission deadlines set to allow enough time for a review cycle. Such deadlines are strict, meaning that you're either in or out. I personally believe that deadlines are a great way to accelerate research because they create a specific goal to work towards and to wrap up and document results. However, the binary nature of deadlines can lead to frustration and requires careful planning. I'll now summarize eight rules and techniques I learned (partially the hard way) while hunting hundreds of deadlines as a student, group leader, and professor together with my students.

1) plan early: Have a complete plan ready months in advance; it will change, but you need a plan. Start with an outline and milestones. Ask questions: What are the key points, and how do I explain or show them? What experiments do I need? How long will they take? How do I communicate the idea most efficiently (think about analogies and good examples)? Of course, you need the key idea set at the beginning. I suggest starting to plan 2-4 months before the real deadline.

2) start writing immediately: As early as possible (while doing the research), write down everything. Good researchers always document their ideas, thoughts, and experiments; they are always writing. Distill the key points into a working draft. This draft is not wasted: it can be used to extract a conference publication and it can be published as a technical report to provide more information. You should always document what you do.

3) test early: In CS, experiments most likely require some code. While developing this code, test it. Test it in the final configuration. Do not rely on “I think it’s good” until it’s too late. Ideally, develop small regression tests. Always validate simulations and emulations at the beginning. You’ll need this anyway and you don’t want to run everything twice.

4) set a hard deadline: This is the most important point. You need a HARD deadline sufficiently long before the real deadline. You need to be absolutely serious about meeting this deadline at any cost, working through weekends and nights, etc. I'd recommend one or two weeks before the real deadline. This provides a buffer and will reduce stress. Ideally, you'll do nothing (or not much) between this deadline and the real deadline. This gives you the opportunity to make the paper great. In the worst case, you find a major problem and need to work through until the real deadline. Yet, this is less stressful than realizing 24 hours before the deadline that there is a major issue. Remember: set it, be serious, and stick to it at any cost.

5) take it seriously: Meet your own deadline. Seriously. There is always a next deadline, and working on something other than this hard task is always more attractive. But deadlines often come only once a year, and missing them can have a serious impact on your career.

6) prioritize and trade off: It's never possible to do everything you think of to perfection. So decide what is most important, set deadlines for milestones, and in the worst case meet them by simplifying the goal. Never, never, never trade off scientific integrity!!

7) manage your collaborators: Keep them involved and make them see your progress. Make sure they always know how they can help. Pull rather than push, i.e., show that you're working hard and hope that their honor will drive them. Avoid collaborations where this shows no effect. Do not wait; work and help, and minimize dependencies. I have seen cyclic waiting before. Agree on milestones and deadlines (including the hard one) in advance.

8) focus when it gets tight: If it looks like you may not be able to meet your own deadline (which is of course well in advance of the actual deadline) then focus. Cut everything non-essential such as group meetings, talks, chats, excuse yourself from teleconferences etc. (your peers will understand). I strive for a two-week advance personal deadline and begin to cut heavily when it gets tight three weeks before the actual deadline.

Planning is key and the main tools are milestones and self-set deadlines (to be taken seriously). You know that you failed if you have to work very hard the week or day before the deadline (you should of course always work hard, but voluntarily 🙂 ).

11 SPCL@ETH activities at SC14

The Intl. Supercomputing (SC) conference is clearly the main event in HPC. Its program is broad and more than 10k people attend annually. SPCL is mainly focused on the technical program, which makes SC the top-tier conference in HPC. It is the main conference of a major ACM SIG (SIGHPC).

This year, SPCL members co-authored three technical papers in the very competitive program with several thousand attendees! One was even nominated for the best student paper award, and, to take it upfront, we got it! Congrats Maciej! All talks were very well attended (more than 100 people in the room).

All of these talks were presented by collaborators, so I was hoping to be off the hook. Well, not quite, because I gave seven (7!) invited talks at various events and participated in teaching a full-day tutorial on advanced MPI. The highlight was a keynote at the LLVM workshop. I was also running around all the time because I co-organized the overall workshop program (with several thousand attendees) at SC14.

So let me share my experience of all these exciting events in chronological order!

1) Sunday: IA3 Workshop on Irregular Applications: Architectures & Algorithms

This workshop was very nice: kicked off by top-class keynotes from Onur Mutlu (CMU) and Keshav Pingali (UT), followed by great paper talks and a panel in the afternoon. I served on the panel with some top-class people and it was a lot of fun!


Giving my panel presentation on accelerators for graph computing.


Arguing during the panel discussion (about Hadoop at this moment) with (left to right): Keshav Pingali (UT Austin), John Shalf (Berkeley), me (ETH), Clayton Chandler (DOD), Benoit Dupont de Dinechin (Kalray), Onur Mutlu (CMU), and Maya Gokhale (LLNL). A rather argumentative group :-).

My slides can be found here.

2) Monday – LLVM Workshop

It was long overdue to discuss the use of LLVM in the context of HPC. So thanks to Hal Finkel and Jeff Hammond for organizing this fantastic workshop! I kicked it off with some considerations about runtime-recompilation and how to improve codes.

The volunteers counted around 80 attendees in the room! Not too bad for a workshop. My slides on “A case for runtime recompilation in HPC” are here.

3) Monday – Advanced MPI Tutorial

Our tutorial attendee numbers keep growing! More than 67 people registered but it felt like more were showing up for the tutorial. We also released the new MPI books, especially the “Using Advanced MPI” book which shortly after became the top new release on Amazon in the parallel processing category.

4) Tuesday – Graph 500 BoF

There, I released the fourth Green Graph 500 list. Not much new happened on the list (same as for the Top500 and Graph500) but the BoF was still fun! Peter Kogge presented some interesting views on the data of the list. My slides can be found here.

5) Tuesday – LLVM BoF

Concurrently with the Graph 500 BoF was the LLVM BoF, so I had to speak at both at the same time. Well, that didn't go too well (I'm still only one person; apologies to Jim). I only made it to 20% of this BoF, but it was great! Again, very good turnout; LLVM is certainly becoming more important every year. My slides are here.

6) Tuesday – Simulation BoF

There are many simulators in HPC! Often for different purposes but also sometimes for similar ones. We discussed how to collaborate and focus our efforts better. I represented LogGOPSim, SPCL’s discrete event simulator for parallel applications.

My talk summarized features and achievements and slides can be found here.

7) Tuesday – Paper Talk “Slim Fly: A Cost Effective Low-Diameter Network Topology”

Our paper was up for Best Student Paper and Maciej did a great job presenting it. But no need to explain, go and read it here!


Maciej presenting the paper! Well done.

8) Wednesday – PADAL BoF – Programming Abstractions for Data Locality

Programming has to become more data-centric as architectures evolve. This BoF followed an earlier workshop in Lugano on the same topic. It was great — no slides this time, just an open discussion! I hope I didn’t upset David Padua :-).


Didem Unat moderated, and the panelists were Paul Kelly (Imperial), Brad Chamberlain (Cray), Naoya Maruyama (TiTech), David Padua (UIUC), me (ETH), and Michael Garland (NVIDIA). It was a truly lively BoF :-).

But hey, I just got it in writing from the Swiss that I’m not qualified to talk about this topic — bummer!


The room was packed and the participation was great. We didn't get to the third question! I loved the education question; we need to change the way we teach parallel computing.

9) Wednesday – Paper Talk “Understanding the Effects of Communication and Coordination on Checkpointing at Scale”

Kurt Ferreira, a collaborator from Sandia, spoke about unexpected overheads of uncoordinated checkpointing, analyzed using LogGOPSim (it's a cool name!!). Go read the paper if you want to know more!


Kurt speaking.

10) Thursday – Paper Talk “Fail-in-Place Network Design: Interaction between Topology, Routing Algorithm and Failures”

Presented by Jens Domke, a collaborator from Tokyo Tech (now at TU Dresden). A nice analysis of what happens to a network when links or routers fail. Read about it here.


Jens speaking.

11) Thursday – Award Ceremony

Yes, somewhat unexpectedly, we got the best student paper award. It is the second major technical award in a row for SPCL (after last year's best paper).


Happy :-).

Coverage by Michele @ HPC-CH and Rich @ insideHPC.

The MPI 3.0 Book – Using Advanced MPI

Our book “Using Advanced MPI” will appear in about a month; now is the time to pre-order on Amazon at a reduced price. It is published by the prestigious MIT Press and is a must-read for parallel computing experts.

The book contains everything advanced MPI users need to know. It presents all important concepts of MPI 3.0 (including all newly added functions such as nonblocking collectives and the greatly extended one-sided functionality). But the key is that the book is written in an example-driven style. All functions are motivated with use cases, and working code is available for most of them. This follows the successful tradition of the “Using MPI” series, lifting it to MPI-3.0, and hopefully makes it an exciting read!

David Bader’s review hits the point

With the ubiquitous use of multiple cores to accelerate applications ranging from science and engineering to Big Data, programming with MPI is essential. Every software developer for high performance applications will find this book useful for programming on modern multicore, cluster, and cloud computers.

Here is a quick overview of the contents:

Section 1: “Introduction” provides a brief overview of the history of MPI and summarizes the basic concepts.

Section 2: “Working with Large Scale Systems” contains examples of how to create highly scalable systems using nonblocking collective operations, the new distributed graph topology for MPI topology mapping, neighborhood collectives, and advanced communicator creation functions. It equips readers with all the information needed to write highly scalable codes. It even describes how fault-tolerant applications could be written using a high-quality MPI implementation.

Section 3: “Introduction to Remote Memory Operations” is a gentle and light introduction to RMA (One Sided) programming using MPI-3.0. It starts with the concepts of memory exposure (windows) and simple data movement. It presents various example problems followed by practical advice to avoid common pitfalls. It concludes with a discussion on performance.

Section 4: “Advanced Remote Memory Access” will make you a true expert in RMA programming. It covers advanced concepts such as passive target mode and window allocation, using various examples. It also discusses memory models and scalable synchronization approaches.

Section 5: “Using Shared Memory with MPI” explains MPI's approach to shared memory. MPI-3.0 added support for allocating shared memory, which essentially enables the new hybrid programming model “MPI+MPI”. This section explains the guarantees that MPI provides (and what it does not provide) and several use cases for shared-memory windows.
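As a small taste of what such shared-memory windows look like, here is a sketch of mine using standard MPI-3 calls (not an excerpt from the book; the neighbor-exchange pattern is arbitrary):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);

  /* Split off a communicator of processes that can share memory. */
  MPI_Comm shmcomm;
  MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                      MPI_INFO_NULL, &shmcomm);
  int srank, ssize;
  MPI_Comm_rank(shmcomm, &srank);
  MPI_Comm_size(shmcomm, &ssize);

  /* Allocate one int per process in a shared-memory window. */
  int *mem;
  MPI_Win win;
  MPI_Win_allocate_shared(sizeof(int), sizeof(int), MPI_INFO_NULL,
                          shmcomm, &mem, &win);

  MPI_Win_lock_all(0, win);
  *mem = srank;         /* write my element with a plain store */
  MPI_Win_sync(win);    /* make my store visible */
  MPI_Barrier(shmcomm); /* make sure everyone has written */
  MPI_Win_sync(win);    /* make the others' stores visible to me */

  /* Read the right neighbor's element directly through a load. */
  MPI_Aint sz; int disp; int *peer;
  MPI_Win_shared_query(win, (srank + 1) % ssize, &sz, &disp, &peer);
  printf("rank %d sees neighbor value %d\n", srank, *peer);
  MPI_Win_unlock_all(win);

  MPI_Win_free(&win);
  MPI_Comm_free(&shmcomm);
  MPI_Finalize();
  return 0;
}
```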

Section 6: “Hybrid Programming” provides a detailed discussion of how to use MPI in cooperation with other programming models, for example threads or OpenMP. Hybrid programming is emerging as a standard technique, and MPI-3.0 introduces several functions to ease the cooperation with other models.

Section 7: “Parallel I/O” is most important in the future Big Data world. MPI provides a large set of facilities to support operations on large distributed data sets. We discuss how MPI supports contiguous and noncontiguous accesses as well as the consistency of file operations. Furthermore, we provide hints for improving the performance of MPI I/O.

Section 8: “Coping with Large Data” addresses what happens once Big Data sets are in main memory and we need to communicate them. MPI-3.0 supports handling large data (>2 GiB) through derived datatypes. We explain how to enable this support and the limitations of the current interface.
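The basic trick behind this, roughly sketched (my own illustration, not an excerpt from the book; the helper send_large and the chunk size of 2^30 elements are made up, and the receiver would issue the symmetric calls):

```c
#include <mpi.h>

/* Send more than INT_MAX elements by packing a fixed chunk of elements
   into a derived datatype, so that each count argument stays small. */
void send_large(const double *buf, long long nelems, int dest, int tag,
                MPI_Comm comm) {
  const long long CHUNK = 1LL << 30; /* 2^30 doubles per derived type */
  MPI_Datatype chunk_type;
  MPI_Type_contiguous((int)CHUNK, MPI_DOUBLE, &chunk_type);
  MPI_Type_commit(&chunk_type);

  long long nchunks = nelems / CHUNK;
  long long rest = nelems % CHUNK;
  MPI_Send(buf, (int)nchunks, chunk_type, dest, tag, comm);
  MPI_Send(buf + nchunks * CHUNK, (int)rest, MPI_DOUBLE, dest, tag, comm);
  MPI_Type_free(&chunk_type);
}
```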

Section 9: “Support for Performance and Correctness Debugging” is aimed at very advanced programmers as well as tool developers. It describes the MPI tools interface, which allows tools to introspect internals of the MPI library. Its flexible interface supports performance counters and control variables to influence the behavior of MPI. Advanced expert programmers will love this interface for architecture-specific tuning!

Section 10: “Dynamic Process Management” explains how processes can be created and managed. This feature enables growing and shrinking of MPI jobs during their execution and fosters new programming paradigms if it is supported by the batch systems. We only discuss the MPI part in this chapter though.

Section 11: “Working with Modern Fortran” is a must-read for Fortran programmers! How does MPI support type-safe programming and what are the remaining pitfalls and problems in Fortran?

Section 12: “Features for Libraries” addresses advanced library writers and describes principles for developing portable, high-quality MPI libraries.

ExaMPI’13 Workshop at SC13

I wanted to highlight the ExaMPI’13 workshop at SC13. It was a while ago but it is worth reporting!

The workshop's theme was “Exascale MPI” and it addressed several topics on how to move MPI to the next big divisible-by-10^3 floating point number. Actually, for Exascale, it's unclear if it's only FLOPs; maybe it's data now, but then, we easily have machines with Exabytes :-). Anyway, MPI is a viable candidate to run on future large-scale machines, maybe at a low level.

A while ago, some colleagues and I summarized the issues that MPI faces in going to large scale: “MPI on Millions of Cores“. The conclusion was that it’s possible to move forward but some non-scalable elements need to be removed or avoided in MPI. This was right on topic for this workshop, and indeed, several authors of the paper were speaking!

The organizers invited me to give a keynote to kick off the event. I was talking about large-scale MPI and large-scale graph analysis and how this could be done in MPI. [Slides]

The very nice organizers sent me some pictures that I want to share here:


My keynote on large-scale MPI and graph algorithms.


The gigantic room was well filled (I’d guess more than 50 people).


Jesper talking about the EPIGRAM project to address MPI needs for the future large scales.


The DEEP strategy of Julich using inter-communicators (the first users I know of).


Pavan on our heterogeneous future, very nice insights.

All in all, a great workshop with a very good atmosphere. I received many good questions and had very good discussions afterwards.

Kudos to the organizers!

Emerging Technologies ramping up at SC13

A new element of this year's Supercomputing conference, Emerging Technologies, is emerging at SC13 right now. The booth of impressive size (see below) features 17 diverse high-impact projects that will change the future of supercomputing!

The Emerging Technologies booth during bring-up.

Emerging Technologies (ET) is part of the technical program and all proposals have been reviewed in an academically rigorous process. However, as opposed to the standard technical program, ET will be located on the main show floor (booth #3547). This makes it possible to demonstrate technologies and innovations that would otherwise not reach the show floor.

The standing exhibit is complemented by a series of short talks about the technologies. Those talks will take place on Tuesday and Wednesday afternoons in the neighboring “HPC Impact Showcase” theater (booth #3947).

Check out http://sc13.supercomputing.org/content/emerging-technologies for the booth talks program!

Bob Lucas and I have been organizing the technical exhibit this year and Satoshi Matsuoka will run it for SC14 :-).

So make sure to swing by if you’re at SC13. It’ll definitely be a great experience!

You know that a program committee failed if …

I had the worst experience with conference reviews of my short scientific career (I won't name names, but it's an “A”-ranked conference with a reasonable reputation; you'd better ask me over a beer). I'm trying to take it with humor and share some of the funniest parts here.

So, you know that a program committee failed if …

  1. You receive a one-paragraph review which goes like this (an original citation; only the name of the scheme has been replaced to guarantee the anonymity of the venue):

    “This paper proposes [technique X]. It is a good idea to use [X]. However, it is difficult to understand that [X works in context Y].”

    Yes, that’s it! The final evaluation was a weak accept.

  2. You submit a paper on a programming environment for HPC and you get a comment like:

    “The importance to the field is fair because programmers are easily able to exploit the optimizations to achieve the better execution time of real applications because the optimization can be reused through standard MPI API and the authors showed the speed-up of real applications including [application X].”

    Yes, the system is considered bad if it’s easy to use, portable and backwards compatible :-). Reminds me of “Parallel machines are hard to program and we should make them even harder – to keep the riff-raff off them.”

  3. Your paper receives the scores accept, accept, weak accept, and reject with three reasonable, in fact nice, and encouraging reviews. The reject review is completely unreasonable and it criticizes the writing style while having at least one or two English mistakes in *every single sentence* :-).
  4. You receive (3) and a completely unnecessary and offensive sentence at the end of the reject review which says “The only good thing in this paper is [X]” where [X] is absolutely unrelated (and in fact not even existing or reasonably conceivable).
  5. You received (4) and rebutted the hell out of this completely unreasonable review (which wasn’t even consistent in itself in addition to being offensive). Assume the rebuttal took you a day since you had to interpret the review’s twisted English and strange criticisms and rebut it in a technical and polite way (which seems hard); AND the rebuttal was *completely* ignored, i.e., neither the review was updated nor did you receive a note from the chair about what happened.

  6. You call up some friends who attended the TPC meeting and (1)-(5) are reinforced.

So, after all, there is now one more conference that I will not recommend to anyone for a while. On the other hand, I may be spoiled since I received absolutely outstanding reviews for my submissions before that (not all of which were accepted, but most :-)).