Greg presented our paper on automatic complexity analysis for parallel programs at ACM SPAA 2014. It was a great presentation!
Andrea presented our paper on how to achieve bitwise reproducibility for floating-point arithmetic on any architecture cheaply (nearly for free). See the paper for more details.
So we did it again: we celebrated Greg’s, Maciej’s, and Bogdan’s recent successes with a barbecue. But we would not be SPCL if it were just any barbecue … of course it was at the top of a mountain, this time the “Grosse Mythen”.
Remember v1.0, our hike up Mount Rigi? Switzerland is absolutely awesome! Mountains, everything is green, ETH.
Here are some impressions:
The complete tour was 6.9 km, with a total walking time of 3.35 hours and an altitude difference of only 750 m (from 1,149 m to 1,899 m). Much shorter and less climbing than our last trip. But not less fun! And yes, my GPS was a bit off :-/.
All in all, *awesome* and very efficient; the whole tour took 7.5 hours. So we set some new standards with SPCL hike v3.0!
I wanted to highlight the ExaMPI’13 workshop at SC13. It was a while ago but it is worth reporting!
The workshop’s theme was “Exascale MPI” and it addressed several topics on how to move MPI to the next big divisible-by-10^3 floating-point milestone. Actually, for exascale, it’s unclear whether it’s only FLOPs; maybe it’s data now, but then we easily have machines with exabytes. Anyway, MPI is a viable candidate to run on future large-scale machines, maybe at a low level.
A while ago, some colleagues and I summarized the issues that MPI faces in going to large scale: “MPI on Millions of Cores“. The conclusion was that it is possible to move forward, but some non-scalable elements need to be removed from or avoided in MPI. This was right on topic for this workshop, and indeed, several authors of the paper were speaking!
The organizers invited me to give a keynote to kick off the event. I talked about large-scale MPI and large-scale graph analysis, and how the latter could be done in MPI. [Slides]
The very nice organizers sent me some pictures that I want to share here:
My keynote on large-scale MPI and graph algorithms.
The gigantic room was well filled (I’d guess more than 50 people).
Jesper talking about the EPIGRAM project, which addresses MPI’s needs at future large scale.
The DEEP strategy of Jülich, using inter-communicators (the first users of them I know of).
Pavan on our heterogeneous future, very nice insights.
All in all, a great workshop with a very good atmosphere. I received many good questions and had very good discussions afterwards.
Kudos to the organizers!
This year, I attended my first HiPEAC conference. We had a paper in the main track. It was in Vienna and thus really easy to reach (one hour by plane); I actually thought about commuting from Zurich every day.
Bogdan presented the paper and did a very good job! I received several positive comments afterwards. Kudos Bogdan!
Supercomputing is the premier conference in high-performance and parallel computing. With more than 10,000 attendees, it’s also the largest and highest-impact conference in the field. Literally everybody is there. Torsten summarized the role of the conference in a blog post over at CACM.
SPCL had a great year at Supercomputing 2013 (SC13)! We’ve been involved in multiple elements of the technical program:
- Three paper talks (Alexandru, Andrew, Robert) in the technical papers program
- Two posters (Aditya, Robert) in the posters program
- SPCL researchers received three awards
- Torsten gave a keynote at the ExaMPI’13 workshop
- Torsten co-presented a well-attended (~50 people) tutorial on advanced MPI programming
- Torsten revealed the 2nd Green Graph 500 List at the Graph 500 BoF
- Torsten co-organized the Emerging Technologies program
Here are some impressions:
All in all – a great success! Congrats and thanks to everyone who contributed!
Some more nice pictures can be found here.
A new element in this year’s Supercomputing (SC13) conference, Emerging Technologies, is emerging at SC13 right now. The impressively sized booth (see below) features 17 diverse high-impact projects that will change the future of supercomputing!
Emerging Technologies (ET) is part of the technical program, and all proposals have been reviewed in an academically rigorous process. However, as opposed to the standard technical program, ET is located on the main show floor (booth #3547). This makes it possible to demonstrate technologies and innovations that would otherwise not reach the show floor.
The standing exhibit is complemented by a series of short talks about the technologies. These talks will take place on Tuesday and Wednesday afternoons in the neighboring “HPC Impact Showcase” theater (booth #3947).
Check out http://sc13.supercomputing.org/content/emerging-technologies for the booth talks program!
Bob Lucas and I have been organizing the exhibit this year, and Satoshi Matsuoka will run it for SC14.
So make sure to swing by if you’re at SC13. It’ll definitely be a great experience!
Today I typed the last commands on my long-running server (which served www.unixer.de from 2006 until yesterday):
benten ~ $ uptime
 01:31:47 up 676 days, 16:20, 5 users, load average: 2.08, 1.42, 1.39
benten ~ $ dd if=/dev/zero of=/dev/hda &
benten ~ $ dd if=/dev/zero of=/dev/hdb &
benten ~ $ dd if=/dev/zero of=/dev/hdc &
This machine was an old decommissioned cluster node (well, the result of combining two half-working nodes) and served me very well since 2006 (seven years!). Today, it was shut off.
It’s nearly historic (single-core!):
benten ~ $ cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 15
model           : 1
model name      : Intel(R) Pentium(R) 4 CPU 1.50GHz
stepping        : 2
cpu MHz         : 1495.230
cache size      : 256 KB
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 2
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm up pebs bts
bogomips        : 2995.24
clflush size    : 64
power management:

benten ~ $ free
             total       used       free     shared    buffers     cached
Mem:        775932     766372       9560          0      85720     273792
-/+ buffers/cache:     406860     369072
Swap:       975200      57904     917296

benten ~ $ fdisk -l
Disk /dev/hda: 20.0 GB, 20020396032 bytes
Disk /dev/hdb: 500.1 GB, 500107862016 bytes
Disk /dev/hdc: 80.0 GB, 80026361856 bytes
Disk /dev/hdd: 500.1 GB, 500107862016 bytes

benten ~ $ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 hdb1 hdd1(F)
      488383936 blocks [2/1] [U_]
Pavan Balaji, Jim Dinan, Rajeev Thakur and I are giving our Advanced MPI Programming tutorial at Supercomputing 2013 on Sunday November 17th.
Are you wondering about the new MPI-3 standard, how it affects you as a scientific or HPC programmer, and which nice new features you can use to make your life easier and your application faster? Then you should not miss our tutorial.
Our abstract summarizes the main topics:
The vast majority of production parallel scientific applications today use MPI and run successfully on the largest systems in the world. For example, several MPI applications are running at full scale on the Sequoia system (on ~1.6 million cores) and achieving 12 to 14 petaflops/s of sustained performance. At the same time, the MPI standard itself is evolving (MPI-3 was released late last year) to address the needs and challenges of future extreme-scale platforms as well as applications. This tutorial will cover several advanced features of MPI, including new MPI-3 features, that can help users program modern systems effectively. Using code examples based on scenarios found in real applications, we will cover several topics including efficient ways of doing 2D and 3D stencil computation, derived datatypes, one-sided communication, hybrid (MPI + shared memory) programming, topologies and topology mapping, and neighborhood and nonblocking collectives. Attendees will leave the tutorial with an understanding of how to use these advanced features of MPI and guidelines on how they might perform on different platforms and architectures.
This tutorial is about advanced use of MPI. It will cover several advanced features that are part of MPI-1 and MPI-2 (derived datatypes, one-sided communication, thread support, topologies and topology mapping) as well as new features that were recently added to MPI as part of MPI-3 (substantial additions to the one-sided communication interface, neighborhood collectives, nonblocking collectives, and support for shared-memory programming).
Implementations of MPI-2 are widely available both from vendors and open-source projects. In addition, the latest release of the MPICH implementation of MPI supports all of MPI-3. Vendor implementations derived from MPICH will soon support these new features. As a result, users will be able to use in practice what they learn in this tutorial.
The tutorial will be example driven, reflecting scenarios found in real applications. We will begin with a 2D stencil computation with a 1D decomposition to illustrate simple Isend/Irecv-based communication. We will then use a 2D decomposition to illustrate the need for MPI derived datatypes. We will introduce a simple performance model to demonstrate what performance can be expected and compare it with actual performance measured on real systems. This model will be used to discuss, evaluate, and motivate the rest of the tutorial.
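To make this concrete, here is a minimal sketch of the Isend/Irecv halo exchange for the 1D (row-block) decomposition; the grid size NX, the divisibility of NX by the process count, and the variable names are my assumptions for illustration, and error checking is omitted:

#include <mpi.h>
#include <stdlib.h>

#define NX 1024  /* global grid is NX x NX -- an assumed size */

int main(int argc, char **argv) {
  MPI_Init(&argc, &argv);
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  int ny = NX / size;  /* local row count; assume size divides NX */
  /* local block with one ghost row above and one below */
  double *u = calloc((size_t)(ny + 2) * NX, sizeof(double));

  int up   = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
  int down = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

  MPI_Request req[4];
  /* receive into the ghost rows; send the first and last interior rows */
  MPI_Irecv(&u[0],         NX, MPI_DOUBLE, up,   0, MPI_COMM_WORLD, &req[0]);
  MPI_Irecv(&u[(ny+1)*NX], NX, MPI_DOUBLE, down, 0, MPI_COMM_WORLD, &req[1]);
  MPI_Isend(&u[1*NX],      NX, MPI_DOUBLE, up,   0, MPI_COMM_WORLD, &req[2]);
  MPI_Isend(&u[ny*NX],     NX, MPI_DOUBLE, down, 0, MPI_COMM_WORLD, &req[3]);
  MPI_Waitall(4, req, MPI_STATUSES_IGNORE);

  /* ... apply the stencil to the interior points here ... */

  free(u);
  MPI_Finalize();
  return 0;
}

Note that the ghost rows here are contiguous in memory; with a 2D decomposition the column halos become strided, which is exactly where a derived datatype such as MPI_Type_vector(ny, 1, NX, MPI_DOUBLE, &coltype) avoids manual packing.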
We will use the same 2D stencil example to illustrate various ways of doing one-sided communication in MPI and discuss the pros and cons of the different approaches, also compared with regular point-to-point communication. We will then discuss a 3D stencil without getting into complicated code details.
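As a hedged illustration (reusing the hypothetical u, ny, NX, up, and down from the sketch above), the same ghost-row exchange with one-sided communication and fence synchronization, one of the three synchronization forms covered in the tutorial, might look like this:

MPI_Win win;
/* expose the local block, including ghost rows, in a window */
MPI_Win_create(u, (MPI_Aint)(ny + 2) * NX * sizeof(double), sizeof(double),
               MPI_INFO_NULL, MPI_COMM_WORLD, &win);

MPI_Win_fence(0, win);
/* write my first interior row into the upper neighbor's bottom ghost row
   and my last interior row into the lower neighbor's top ghost row;
   puts to MPI_PROC_NULL are simply no-ops */
MPI_Put(&u[1*NX],  NX, MPI_DOUBLE, up,   (MPI_Aint)(ny + 1) * NX, NX, MPI_DOUBLE, win);
MPI_Put(&u[ny*NX], NX, MPI_DOUBLE, down, 0,                       NX, MPI_DOUBLE, win);
MPI_Win_fence(0, win);  /* after this fence the ghost rows are valid */

MPI_Win_free(&win);

The other two synchronization forms (post/start/complete/wait and lock/unlock) fit the same structure, which is part of what the pros-and-cons discussion compares.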
We will use examples of distributed linked lists and distributed locks to illustrate some of the new advanced one-sided communication features, such as the atomic read-modify-write operations.
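For flavor, here is a minimal sketch of such an atomic read-modify-write: a shared counter hosted on rank 0 and incremented with MPI_Fetch_and_op, the kind of building block the linked-list and lock examples rely on. The names are hypothetical and rank is assumed from the earlier sketch:

MPI_Win cwin;
long *counter;
/* rank 0 hosts the counter; all other ranks contribute a zero-size window */
MPI_Win_allocate(rank == 0 ? sizeof(long) : 0, sizeof(long),
                 MPI_INFO_NULL, MPI_COMM_WORLD, &counter, &cwin);
if (rank == 0) {
  MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, cwin);
  *counter = 0;                      /* initialize inside an access epoch */
  MPI_Win_unlock(0, cwin);
}
MPI_Barrier(MPI_COMM_WORLD);

long one = 1, ticket;
MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, cwin);
/* atomically fetch the old value and add 1 -- every rank gets a unique ticket */
MPI_Fetch_and_op(&one, &ticket, MPI_LONG, 0, 0, MPI_SUM, cwin);
MPI_Win_unlock(0, cwin);

MPI_Win_free(&cwin);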
We will discuss the support for threads and hybrid programming in MPI and provide two hybrid versions of the stencil example: MPI+OpenMP and MPI+MPI. The latter uses the new features in MPI-3 for shared-memory programming. We will also discuss performance and correctness guidelines for hybrid programming.
We will introduce process topologies, topology mapping, and the new “neighborhood” collective functions added in MPI-3. These collectives are particularly intended to support stencil computations in a scalable manner, both in terms of memory consumption and performance.
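A brief sketch of the idea, assuming the rank count (size, as above) factors into a 2D grid: one call exchanges data with all four Cartesian neighbors. A real stencil would pass derived datatypes via MPI_Neighbor_alltoallw instead of single values:

int dims[2] = {0, 0}, periods[2] = {0, 0};
MPI_Comm cart;
MPI_Dims_create(size, 2, dims);      /* factor the ranks into a 2D grid */
MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1 /* reorder */, &cart);

/* each rank has up to four neighbors (boundary ones are MPI_PROC_NULL);
   exchange one value with each in a single neighborhood collective */
double sendbuf[4] = {0}, recvbuf[4];
MPI_Neighbor_alltoall(sendbuf, 1, MPI_DOUBLE,
                      recvbuf, 1, MPI_DOUBLE, cart);

The reorder flag in MPI_Cart_create is what permits the implementation to remap ranks onto the machine topology, which ties into the topology-mapping part of the tutorial.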
We will conclude with a discussion of other features in MPI-3 not explicitly covered in this tutorial (interface for tools, Fortran 2008 bindings, etc.) as well as a summary of recent activities of the MPI Forum.
Our planned agenda for the day is:
- Introduction (8.30–10.00)
- Background: What is MPI
- MPI-1, MPI-2, MPI-3
- 2D stencil code with 1D decomposition: Isend/Irecv version
- 2D stencil code with 2D decomposition: Introduce derived datatypes
- Introduce simple performance modeling and measurement
- One-Sided Communication (10.30–12.00)
- Basics of one-sided communication or remote memory access (RMA)
- 2D stencil code with 1D decomposition: RMA with 3 forms of synchronization
- 3D stencil: What changes and what to pay attention to
- Introduce other features of MPI-3 RMA
- Linked list or distributed lock example demonstrating new MPI-3 RMA features
- Lunch (12.00–1.30)
- MPI and Threads (1.30–3.00)
- What does the MPI standard specify about threads
- How does it enable hybrid programming
- Hybrid (MPI+OpenMP) version of 2D stencil code
- Hybrid (MPI+MPI) version of 2D stencil code using MPI-3 shared-memory support
- Performance and correctness guidelines for hybrid programming
- Topologies, Neighborhood/Nonblocking Collectives (3.30–5.00)
- Topologies and topology mapping
- 2D stencil code with 2D decomposition using neighborhood collectives
- MPI-3 nonblocking collectives with example (see the sketch after this agenda)
- Summary of other features in MPI-3
- Summary of recent activities of the MPI Forum
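As a teaser for the nonblocking-collectives item above, a tiny hedged example: start a global reduction, overlap it with independent work, and complete it later (the local value is a placeholder):

double local = 1.0, global;   /* hypothetical local contribution */
MPI_Request req;
MPI_Iallreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD, &req);
/* ... independent computation overlaps with the running reduction ... */
MPI_Wait(&req, MPI_STATUS_IGNORE);  /* 'global' is valid from here on */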
We’re looking forward to many interesting discussions!
EuroMPI is a very nice conference for a specialized sub-field, namely MPI, the Message Passing Interface. I’m a long-term attendee since I work a lot on MPI and its standardization. We had a little more than 100 attendees this year in Madrid, and the organization was just outstanding!
We listened to 25 paper talks and five invited talks around MPI. For example, Jesper Traeff discussed how to generalize datatypes towards collective operations:
Or Rajeev Thakur, who explained how we get to Exascale and that MPI is essentially ready:
Besides the many great talks, we also had some fun, like the city walking tour organized by the conference,
the evening reception, a very nice networking event,
or more networking in the Retiro park,
followed by the traditional dinner.
On the last day, SPCL’s Timo Schneider presented our award-winning paper on runtime compilation for MPI datatypes
with a provocative start (there were many vendors in the room)
but an end that everyone agreed with.
The award ceremony followed right after the talk.
The conference was later closed with the announcement of next year’s edition: EuroMPI will move to Japan (for the first time outside of Europe).
All in all, a very nice conference! Kudos to the organizers.
The one weird thing about Madrid, though … I got hit in the face by a random woman in the subway on my way back. Apparently she claimed I had stolen her seat (not sure why or how that happened, since many other seats were empty), but she didn’t speak English and kept swearing at me. Weird people!