Nue Routing: fast, 100% fault-tolerant, 100% applicable, 100% deadlock-free

The OFA just released a new version (v3.3.21) of the Open Subnet Manager (OpenSM) for InfiniBand, including many interesting features:

  • Support for HDR link speed and 2x link width
  • New routing algorithm: Nue routing
  • Support for ignoring throttled links with Nue [1,2] and (DF)SSSP [3,4] routing
  • …and many more internal enhancements to OpenSM.

Nue Routing

Deadlock-freedom in general, as well as the limited number of virtual channels provided by modern interconnects, has been a long-standing problem for network researchers and engineers.
Nue routing is not just yet another new algorithm for statically routed high-performance interconnects, but a revolutionary step with respect to deadlock-freedom and fault-tolerance.

Our goal was to combine the advantages of existing routing algorithms, primarily the flexibility of Up/Down routing and the outstanding global path balancing of SSSP routing [5], while guaranteeing deadlock-freedom regardless of the number of virtual channels/lanes, the network type, or its size.
The incarnation of this effort, called Nue routing and named after the legendary Japanese chimera, is the first algorithm capable of delivering high throughput, low latency, fast path calculation, and 100% guaranteed deadlock-freedom for any type of topology and network size.
All of this is enabled by the fundamental switch from calculating the routing within a graph representing the network to a new graph representation: the complete channel dependency graph.
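To give an intuition for this representation, here is a minimal toy sketch in Python (for illustration only; this is not the actual OpenSM implementation): the vertices of the channel dependency graph are the directed channels (links) of the network, and an edge connects two channels whenever a packet could hold the first while requesting the second. A routing is deadlock-free if the dependencies it actually uses form an acyclic subgraph.

# Toy sketch of a complete channel dependency graph (CDG);
# an illustration of the concept, not the OpenSM implementation of Nue.
def complete_cdg(adj):
    """adj: dict mapping each node to its list of neighbors (directed network).
    Returns the CDG vertices (channels) and edges (channel dependencies)."""
    channels = [(u, v) for u in adj for v in adj[u]]
    # Dependency: a packet occupying channel (u, v) may request (v, w).
    deps = [((u, v), (v, w))
            for (u, v) in channels
            for w in adj[v]
            if w != u]  # ignore immediate U-turns
    return channels, deps

# Example: a unidirectional 4-node ring; its complete CDG forms a cycle,
# so a routing using all of these dependencies would risk deadlock.
ring = {0: [1], 1: [2], 2: [3], 3: [0]}
channels, deps = complete_cdg(ring)
print(len(channels), "channels,", len(deps), "dependencies")  # 4 channels, 4 dependencies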

Without going into detail about the inner workings, which can be found in our HPDC’16 publication [1] and Jens’ dissertation [2; Chapter 6], we will highlight Nue’s capabilities with the next two figures.

The figure below compares many of OpenSM’s existing routing algorithms (we excluded MinHop and DOR, since these are only deadlock-free under certain constraints) to our Nue routing for a variety of network topologies, each hosting roughly between 1000 and 2000 compute nodes.
We used a cycle-accurate InfiniBand simulator to obtain these results.
Each bar represents the simulated communication throughput for an MPI_Alltoall operation (2KB payload per node) executed on all compute nodes of the topology, and hence gives a fairly accurate estimate of the capabilities of the network and of how well the routing utilizes the available resources.
For many subgraphs only a subset of OpenSM’s routing engines is shown alongside Nue, because we filtered out instances where the routing engine was not able to create valid routing tables.
Above each bar we list the number of virtual channels the routing consumes to achieve a deadlock-free routing configuration.
Furthermore, the achievable network throughput under the given traffic pattern is shown for Nue routing with different numbers of virtual channels, ranging from 1 (equivalent to the absence of VCs) to 8.

[Figure: nue-perf]

In summary, the figure shows that Nue routing is competitive with the best-performing routing for each individual topology, achieving between 84% (for the 10-ary 3-tree) and 121% (for the Cascade network) of its throughput.
Occasionally, depending on the given number of virtual channels, Nue is able to outperform the best competitor.
While our original design goals never included the ambition to beat every other routing on its home turf, we are glad to see that we can outperform most of them given a sufficient number of channels.
Moreover, this figure demonstrates Nue’s high flexibility with respect to the given number of channels.
Take for example the Kautz network (left; middle row), where Nue can create a decent deadlock-free routing configuration without virtual channels, while DFSSSP needs 8 VCs and LASH needs at least 5 VCs; yet Nue is also able to outperform both with just 5 VCs.

The next figure demonstrates Nue’s fault-tolerance as well as its relatively fast path calculation in comparison to other topology-agnostic routing engines (DFSSSP/LASH) and the topology-aware Torus2QOS engine.
For this test we used regular 3D torus networks of different sizes and randomly injected 1% switch-to-switch link failures into each topology.
The runtime for calculating all n-to-n paths in the network was measured for each routing engine and plotted, but only in cases where the engine was capable of producing a valid routing within the realistic constraint of 8 VCs.

[Figure: nue-runtime]

Thanks to its O(n² log n) runtime complexity and efficient implementation, Nue starts to outperform DFSSSP and LASH in runtime already for relatively small tori.
But more importantly, Nue can always create deadlock-free routing tables, while all other engines (even the semi-fault-tolerant and topology-aware Torus2QOS) eventually fail for larger networks.

Overall the advantages of Nue routing are manifold:

  • Allows a “fire-and-forget” approach to network administration, i.e., it works 100% regardless of network failures, which is ideal for fail-in-place networks
  • Low runtime and memory complexity (O(n² log n) and O(n²), respectively)
  • Guaranteed deadlock-freedom and high configurability in terms of VC usage
  • VCs are not required for deadlock-freedom, which extends its applicability to NoCs and other interconnects that don’t support virtual channels
  • Completely topology-agnostic and yet very good path balancing under the given deadlock-freedom constraint
  • Support for QoS and deadlock-freedom simultaneously (both realized in InfiniBand via VCs)
  • Theoretically applicable to other (HPC) interconnects: RoCEv2, NoC, OPA, …

and everyone can now test and use Nue routing with the opensm v3.3.21 release, either by choosing it via the command-line option:

--routing_engine nue   [and optionally: --nue_max_num_vls <given #VCs>]

or via OpenSM configuration file:

routing_engine nue
nue_max_num_vls <given #VCs>

The default nue_max_num_vls for Nue is 1, which enforces deadlock-freedom even if QoS is not enabled.

For less adventurous admins ☺, or systems with specifically optimized routing, we still recommend always using Nue as a fallback (in case the primary routing fails) via:

routing_engine <primary>,nue

to ensure maximum fault-tolerance and uninterrupted operation of the system until the hardware failures are fixed (which is definitely better than OpenSM’s default fallback to the deadlock-prone MinHop).

A more detailed description of OpenSM’s options for Nue is provided in the documentation, and for more fine-grained control over the virtual channel configuration we recommend reading our previous blog post on the DFSSSP routing engine.
(Note: it is HIGHLY advised to install/use the METIS library with OpenSM (enabled via the --enable-metis configure flag when building OpenSM) for improved path balancing in Nue.)
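For those building OpenSM from source, this corresponds to something like the following (a sketch assuming the usual autotools build steps):

./configure --enable-metis && make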

Avoiding throttled links

Our second new feature, which we were able to push upstream, is designed to ease the job of system admins in the case of temporary or long-term link degradation.

More often than one would wish, one or multiple links in large-scale InfiniBand installations get throttled from their intended speed (e.g., 100 Gbps EDR) to much lower speeds, such as 8 Gbps SDR.
While this IB feature is designed to keep the fabric and connectivity up, we argue that such a throttled link becomes a major bottleneck for all application and storage traffic, and hence should be avoided.
Usually, HPC networks, especially fat-trees, have enough path redundancy that moving all paths off the affected link(s) and distributing them across other links should degrade performance less than keeping the link at low speed.
However, identifying, disabling, and ultimately replacing “bad” cables takes time.

So we added a check to the SSSP, DFSSSP, and Nue routing engines that identifies such degraded links and prevents these routings from placing any path onto them, essentially “disabling” the link instantly and issuing a warning in the logs for the system admin.
This feature can be turned on or off in the configuration file of the subnet manager by switching the avoid_throttled_links parameter to TRUE or FALSE, respectively.
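For example, enabling this check alongside Nue in the subnet manager’s configuration file could look as follows (a sketch combining the options shown above):

routing_engine nue
avoid_throttled_links TRUE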

Nue and DFSSSP were developed in collaboration between the main developer Jens Domke of the Matsuoka Laboratory, Tokyo Institute of Technology, and Torsten Hoefler of the Scalable Parallel Computing Lab at ETH Zurich.
We would like to acknowledge Hal Rosenstock, the maintainer of OpenSM, who is always supportive of new ideas, and we greatly appreciated his comments and help during the integration of Nue into the official OpenSM.

[1]: J. Domke, T. Hoefler and S. Matsuoka: Routing on the Dependency Graph: A New Approach to Deadlock-Free High-Performance Routing
[2]: J. Domke: Routing on the Dependency Graph: A New Approach to Deadlock-Free, Destination-Based, High-Performance Routing for Lossless Interconnection Networks (Dissertation)
[3]: J. Domke, T. Hoefler and W. Nagel: Deadlock-Free Oblivious Routing for Arbitrary Topologies
[4]: Our prev. DFSSSP blog post: DFSSSP: Fast (high-bandwidth) Deadlock-Free Routing for InfiniBand Networks
[5]: T. Hoefler, T. Schneider and A. Lumsdaine: Optimized Routing for Large-Scale InfiniBand Networks

SPCL’s activities at ISC’18

Just a brief overview of SPCL’s (non-NDA) ongoing and upcoming activities at ISC’18.

1) We’re in the middle of the Advanced MPI Tutorial

With Antoni Pena from the Barcelona Supercomputing Center.

2) Wednesday, 26.06., 11:15am, Talk: Automatic compiler-driven GPU acceleration with Polly-ACC

Part of the session “Challenges for Developing & Supporting HPC Applications” organized by Bill Gropp. (Related work)

3) Wednesday, 26.06., 1:45pm, Torsten organizes the session “Data Centric Computing” with speakers Anshu Dubey, Felix Wolf, John Shalf, and Keshav Pingali

4) Thursday, 28.06., 10:00am, Talk: High-level Code Transformations for Generating Fast Hardware
(Megabyte room)

At Post Moore’s Law HPC Computing (HCPM) workshop (Related work)

5) Thursday, 28.06., 12:20pm, Talk: Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis
(Gold 3 room)

At Workshop on the Convergence of Large Scale Simulation and Artificial Intelligence (Related work)

6) Thursday, 28.06., 3:20pm, Talk: A Network Accelerator Programming Interface
(Megabyte room)

At Post Moore Interconnects (Beyond CMOS) Workshop (Related work)

7) Thursday, 28.06., Panel: Performance Analysis and Instrumentation of Networks
(Basalt room)

At International Workshop on Communication Architectures for HPC, Big Data, Deep Learning and Clouds at Extreme Scale (Related work)

8) Friday, 29.06., European Processor Initiative (EPI) Steering Meeting

In addition to these public appearances, we’re involved in many meetings, vendor presentations, booth appearances, and other activities. Meet us around the conference and booths!

SC18’s improved reviewing process – call for papers and comments

Disclaimer: This blog post is not binding for the SC18 submission process. It attempts to explain the background and history of the innovations. For authoritative answers regarding the process, authors MUST refer to the SC18 webpage and FAQ!

What many of us know can also be shown with numbers: the SC conference is the most prestigious conference in High Performance Computing (HPC). It is listed at rank 6 in the “Computing Systems” category of Google Scholar’s Metrics (with an H-index of 47 as of January 21st, 2018). It is only topped by TPDS, FGCS, NSDI, ISCA, and ASPLOS, and is thus the highest-ranked HPC conference! The next one is arguably PPoPP, with H-index 37 at rank 20.

The SC conference routinely attracts more than 10,000 attendees and nearly 50% indicated in a representative survey that attending technical presentations was within their top-3 activities. This makes it definitely the HPC conference where speakers reach the largest audience. I speak from experience: my talk at SC17 probably had more than 400 listeners in the audience and its twitter announcement quickly surpassed 10,000 views. So it definitely is the conference where big things start.

This year, I am honored to be SC18’s program chair, with the enormous help of my vice chair Todd Gamblin from LLNL. To make this great conference even greater, especially for authors and readers/attendees, we plan some major changes to the submission process: in addition to rebuttals, we introduce two different types of revisions during the submission. This allows authors to address reviewer issues right within the paper draft, while they may also add new data to support their discoveries. Rebuttals are still possible but will probably become less important because misunderstandings can be clarified right in the draft. Whether the paper is accepted or rejected, the authors will end up with an improved version. The revision process leads to increased interaction between the committee and the authors, which will eventually increase the quality of the publications and talks at the conference. The overall process can be described as an attempt to merge the best parts of the journal review process (expert reviewers and revisions) with those of the conference review process (fixed schedule and quick turnaround).

This process was tested and introduced to the HPC field by David Keyes and myself at the ACM PASC 2016 conference in Switzerland. We were inspired by top-class conferences in the fields of architecture and databases but adapted their process to the HPC community. The established PASC review process motivated the addition of revisions for IPDPS 2018 (through the advocacy of Marc Snir). Now, we introduce similar improvements scaled to the Supercomputing conference series.

The key innovations of the PASC review process were (1) no standing committee (the committee was established by the chairs based on the submissions, similar to a journal); (2) fully double-blind reviews (not even the TPC chairs knew the identity of the authors); (3) short revisions of papers (the authors could submit revised manuscripts with highlighted changes), and (4) expert reviewers (the original reviewers were asked to suggest experts in the topic for a second round of reviews). The results are documented in a presentation and a paper.

My personal highlight was a paper in my area that improved its ranking drastically from the first to the second review because it was largely rewritten during the revision process. In general, the revision seemed highly effective, as the statistics show: of the 105 first reviews, 19 improved their score by one point and 2 by two points in the second review. Scores ranged from 1 (strong reject) to 5 (strong accept). These changes show how revisions improved many reviewers’ opinions of the papers and turned good papers into great papers. The revision even enabled the relatively high acceptance rate of 27% without compromising quality. The expert reviews also had a significant effect, which is analyzed in detail in the paper.

The Supercomputing conference has a long history and an order of magnitude more submissions, and thus a much larger committee with a fixed structure spanning many areas. Furthermore, the conference follows a traditional schedule. All this allows us to adopt only a part of the changes successfully tested at PASC. Luckily, double-blind reviews were already introduced in 2016, and 78% of surveyed attendees preferred them over non-double-blind reviewing. Thus, we can focus our attention on introducing the revision process as well as the consideration of expert reviews.

Adapting the revision process to SC was not a simple task because schedules are set years in advance. For example, the deadline cannot be moved earlier than the end of March due to the necessary coordination with other top-class conferences such as ACM HPDC and ACM ICS (which is already tight, but doable, this year). We will also NOT grant the “traditional” one-week extension. Let me repeat: there will be NO EXTENSIONS this year (as in many other top-class CS conferences). Furthermore, the TPC meeting has already been scheduled for the beginning of June and could not be moved for administrative reasons. The majority of the decisions have to be made during that in-person TPC meeting. We will also have to stay within the traditional acceptance rates of SC. We conclude that significant positive changes are possible within these limited options.

To fit the revision process into the SC schedule, we allow authors to submit a featherweight revision two weeks after receiving the initial reviews. This is a bit more time than for the rebuttal but may not be enough for a full revision. However, the authors are free to prepare it before receiving the reviews. Even in the case of a later rejection, I personally believe that improving a paper is useful. Each featherweight revision should be marked up with the changes very clearly (staying within the page limit); the detailed technology is left to the authors. In addition, the limited-length rebuttal can be used to discuss the changes. The authors need to keep in mind that the reviewers will have *very little* time (less than one week before the TPC meeting) to review the featherweight revision. In fact, they will have barely more time than for reviewing a rebuttal. So the more obviously the changes are marked and presented, the better the chances for a reconsideration by the committee. Furthermore, due to these unfortunate time limitations, we cannot provide a second round of reviews for the featherweight revision (reviewers are free to amend their reviews, but we cannot require them to). Nevertheless, we strongly believe that all authors can use this new freedom to improve their papers significantly. We are also trying to provide some feedback on each paper’s relative ranking to the authors, if the system allows it.

During the in-person TPC meeting, the track chairs will moderate the discussion of each paper and rank each into one of the following categories: Accept, Minor Revision, Major Revision, or Reject. An accepted paper is deemed suitable for direct publication in the SC proceedings; we expect the top 3-5% of the submitted papers to fall into that category. A Minor Revision is similar to a shepherded paper and is accepted with minor amendments, pending a final review by the shepherd; we expect about 10% of the submitted papers to fall into this category. This higher-than-traditional number of shepherded papers is consistent with top conferences in adjacent fields such as OSDI, NSDI, SOSP, SIGMOD, etc. The new grade is Major Revision, which invites the authors to submit a majorly changed paper within one month. A major revision typically requires additional results or analyses. We expect no more than 10% of the initial submissions to fall into this category, and about 5% to be finally accepted (depending on the final quality). Major-revision papers will be reviewed again, and a final decision will be made during an online TPC discussion, moderated by the respective track chair. Finally, papers rejected at any stage will not appear in the SC proceedings.

Regarding expert reviews, we may invite additional reviewers during any stage of the process. Thus, we ask authors to specify all strong conflicts (even people outside the current committee) during the initial submission. Furthermore, we are planning to have reviewers evaluate the reviews of the other reviewers to improve the quality of the process in the long run.

At the end of this discussion, let me place a shameless plug for efforts to improve performance interpretability :-) : We hope that the state of performance reporting can be improved at SC18. While many submissions use excellent scientific methods for evaluating performance on parallel computing systems, some can be improved following very simple rules. I made an attempt to formalize a set of basic rules for performance reporting in the SC15 State-of-the-Practice paper “Scientific Benchmarking of Parallel Computing Systems”. I invite all authors to follow these rules to improve their submissions to any conference (they are of course NOT a prerequisite for SC18 but generally useful ;-) ).

We are very much looking forward to working with the technical papers team to make SC18 the best technical program ever and to consolidate the leading position of the SC conference series in the field of HPC. Please let me or Todd know if you have any comments, make sure to submit your best work to SC18 before March 28, and help us make SC18 have the strongest paper track ever!

I want to especially thank David Keyes for advice and help during PASC’16, Todd Gamblin for the great support in the organization of SC18, and Bronis de Supinski for ideas regarding the adaptation of the PASC process to the SC18 conference. Most thanks go to the track chairs and vice chairs who will support the implementation of the process during the SC18 paper selection (in the order of the tracks): Aydin Buluc, Maryam Mehri Dehnavi, Erik Draeger, Allison Baker, Si Hammond, Madeleine Glick, Lavanya Ramakrishnan, Ioan Raicu, Rob Ross, Kelly Gaither, Felix Wolf, Laura Carrington, Pat McCormick, Naoya Maruyama, Bronis de Supinski, Ashley Barker, Ron Brightwell, and Rosa Badia. And last but not least, the 200+ reviewers of the SC18 technical papers program!

SPCL Activities at SC16

Now that the stress of SC16 is finally over, let me summarize SPCL’s activities at the conference.

In a nutshell, we participated in two tutorials and two panels, organized the H2RC workshop, I gave three invited talks, and my students and collaborators presented our four papers in the SC papers program. Not to mention the dozens of meetings :-) . Some chronological impressions are below:

1) Tutorial “Insightful Automatic Performance Modeling” with A. Calotoiu, F. Wolf, M. Schulz


2) Panel at Sixth Workshop on Irregular Applications: Architectures and Algorithms (IA^3)

I was part of a panel discussion on irregular vs. regular structures for graph computations.


The opening


Discussions :-)



Audience

3) Tutorial “Advanced MPI” with B. Gropp, R. Thakur, P. Balaji

I co-presented the long-running, successful tutorial on advanced MPI.


The section on collectives and topologies

4) Second International Workshop on Heterogeneous Computing with Reconfigurable Logic (H2RC) with Michaela Blott, Jason Bakos, Michael Lysaght

We organized the FPGA workshop for the second time; it was a big success, with people standing in the back of the room. We even convinced database folks (here, my colleague Gustavo Alonso) to attend SC for the first time!


Gustavo’s opening


Full house

5) Invited talk at LLVM-HPC workshop organized by Hal Finkel

I gave a talk about Polly-ACC (Tobias Grosser’s work) at the workshop, quite interesting feedback!


Nice audience


Great feedback

6) Panel at LLVM-HPC workshop

Later, we had a nice panel about what to improve in LLVM to deal with new languages and/or accelerators.

7) SIGHPC annual members’ meeting

As an elected member-at-large, I attended the annual members’ meeting at SC16.

8) Collaborator Jens Domke from Dresden presented our first paper “Scheduling-Aware Routing for Supercomputers”


Huge room, nicely filled.

9) Booth Talk at the Tokyo Institute of Technology booth

It was an interesting experience :-) . At first, you talk to two people; towards the end, there was a crowd. Even though most people missed the beginning, I got very nice questions.

10) Collaborator Bill Tang presented our paper “Extreme Scale Plasma Turbulence Simulations on Top Supercomputers Worldwide”

11) SPCL student Tobias Gysi presented our paper “dCUDA: Hardware Supported Overlap of Computation and Communication”

12) Collaborator Maxime Martinasso presented our paper “A PCIe Congestion-Aware Performance Model for Densely Populated Accelerator Servers”

But as usual, it’s always the informal, sometimes even secret, meetings that make up the SC experience. The two SPCL students Greg and Tobias did a great job learning and representing SPCL while I was running around between meetings. I am so glad I didn’t have to present any papers this year (i.e., that I could rely on my collaborators and students :-) ). Yet, it’s a bit worrying that my level of busyness (measured by the number of parallel meetings and overbooked calendar slots) is getting worse each year. Oh well :-) .

Keynote at HPC China and Public lecture at ETH on Scientific Performance Engineering in HPC

In the last two weeks I gave two presentations on scientific performance engineering, a theme that best describes what we do at my lab (SPCL) at ETH. The first lecture was a keynote at HPC China, the largest conference on High-Performance Computing in Asia (and probably the second largest worldwide). I have to say that this was definitely the best conference I attended this year, for several reasons :-) .


Here is an impression from the impressive conference.

Shortly after that, I presented a similar talk at my home university ETH Zurich as the last step in a long process ;-) . It was great as well — the room was packed (capacity ~250) and people who came late even complained that there were not enough seats — well, their fault, there were some in the front :-) .

Here are some impressions from this important talk:


My department head Prof. Emo Welzl introducing the talk with some personal connections and overlapping interests


Some were even paying attention!


One of the larger lecture rooms in ETH’s main building

In case you missed it, I gave a longer version of the same talk at Cluster 2016 in Taipei (more content for free!).

SPCL barbequeue version 3 (beach edition)

The next iteration in our celebration of SPCL successes since January was completed successfully! This time (based on popular demand) with a beach component where students could swim, fight, and be bitten by interesting marine creatures.

We celebrated our successes at HPDC, ICS, HOTI, and SC16!


Even with some action — boats speeding by rather closely ;-) .

Later, we moved to a barbequeue place a bit up a hill to get some real meat :-) .


We first had to conquer the place — but eventually we succeeded (maybe send somebody ahead next time to occupy it and start a fire).


We had 7kg of Swiss cow this time!


And a much more professional fireplace!


Of course, some studying was also involved in the woods — wouldn’t be SPCL otherwise.


Including the weirdest (e.g., “hanging”) competitions.


We were around 20 people and consumed (listed here for the next planning iteration):

  • 6 x 1.5l water – 1l left at the end
  • 18×0.33l beer, 6×0.5l beer – all gone (much already at the beach)
  • 7l wine
  • 2l vodka
  • 7kg cow meat (4.5kg steaks, 2kg cevapcici, 0.5kg sausage)
  • 2 large Turkish-style breads (too quickly gone)
  • 1 quiche (too quickly gone)
  • 12 American cookies + 16 scones (both home-made)
  • 3/4 large watermelon
  • 0.5kg dates
  • 2kg grapes, 3kg peaches, 5 cucumbers
  • 0.5kg grill pepper, 1kg mushrooms

SPCL activities at SC15

I am just back from SC15, definitely the most stressful week of my year. It was much worse than usual; I slept an average of 4.5 hours last week because I had a full schedule every day and had to prepare overnight. Fortunately, I have my device measuring my sleep, so I could understand why I felt so miserable :-) .

But it was absolutely amazing! I really love SC, the community, the excitement, the science at the conference. As usual, I learned a lot and SPCL communicated a lot. This year, I brought two students with me: Maciej and Tobias. Here is what we did at SC15:

  1. Sunday morning: International Workshop on Heterogeneous High-performance Reconfigurable Computing
    I co-organized this workshop together with a great team! My special thanks go to Michela and Jason! The workshop was wildly successful; the room was packed for the two keynotes by Doug Burger and Jason Cong. We were able to start an interesting discussion about the role of reconfigurable logic in HPC.

  2. Sunday afternoon: Tutorial on Insightful Automatic Performance Modeling

    Together with Alex Calotoiu (main presenter), Felix Wolf, and Martin Schulz. The tutorial discussed our previous work in automatic performance modeling and was well attended (~30)! I’d like to change some things but we’ll see if I can be convincing enough for my co-presenters.

  3. Monday: Full-day tutorial on Advanced MPI Programming

    It was, as usual, very well attended (~50) and a lot of fun to teach! I had to sneak out in the morning to speak at the panel “Research Panel: A Best Practices Guide to (HPC) Research”, which was also a lot of fun (especially with Bart Miller).

    If you couldn’t make it, then I’d suggest the book on the same topic (it has very similar, actually slightly more, content).

  4. Tobias prepared his poster for the ACM student research competition
    He even made it into the finals and presented his work to the jury!

  5. SIGHPC Annual Meeting

    As an elected officer, I attended the SIGHPC BoF at SC15. Much exciting news, especially Intel’s fellowship program!

  6. Graph500 BoF

    As each year, we released the Green Graph500 list. My slides.

  7. BoF Performance Reproducibility in HPC – Challenges and State-of-the-Art

    I presented my disruptive view at this BoF, basically saying that we may want to give up and care about interpretability first! Similar in spirit to my talk later in the week.

  8. Tobias presented the STELLA paper
  9. Georgios presented the diameter-two topologies paper

    A collaboration with IBM Research Zurich. Here’s the paper.

  10. Maciej received the George Michael HPC fellowship

    During the SC15 awards ceremony. Well done Mac!

  11. I presented our paper “Scientific Benchmarking of Parallel Computing Systems”

    The room was nicely filled. The talk was rather provocative, but I put cuddly vegetables on the slides; thus, it must be fine ;-) . Here are slides and paper!



Finally done! I arrived home and accepted the Latsis prize today. Now ready to get a lot of sleep …

2nd SPCL Barbequeue

Continuing our lab tradition that actually started in 2009 (with two people), we celebrated our scientific achievements with a party (now with 20 people). We had a lot to celebrate and even more that I cannot mention here yet (both will be announced by ETH very soon!).

We started at 4pm even though most people arrived around 5pm (partially due to some confusion about the location) and the hard core partied until 12:45am when we nearly ran out of firewood.

Some (rough) consumption statistics:
- ~10l wine
- ~27 bottles of beer
- ~2.5l various hard liquors (too much!)
- 16 beef patties (1.6kg), 8 burger buns
- home-marinated chicken (1kg)
- Bauern sausage (2kg)
- various other (Polish etc.) sausages (~1.5kg)
- 2 full-plate quiches (should have had three, were gone very fast)
- again, low consumption of non-alcoholic beverages (4l water, 2l juice)
- ~2kg vegetables (cucumber, pepper, …)
- 1kg bread
- 45 home-made American-style cookies (chocolate chip, pumpkin, raisin)
- various snacks (peanuts, chips, …)


Two preparing firewood and one watching (no comment!)


Took a while to get the fire going because of the really wet wood but then it was unstoppable!


lots of food and drinks (I don’t have a good picture of the big pile of food unfortunately)


Even special vintage wines from 1993 from Moldova.


Starting the special BBQ setup after making enough embers.


Nice chats, nice forest (Switzerland rocks)


When shopping, we couldn’t not buy the Swiss Eidgenoss beer “Ein Schluck Heimat” :-) .


The grill looked 10x more professional than last time (see some exponential growth here).


It got dark a bit early, well, it’s late fall. BUT the weather was very nice, and even though it was around 10°C, it was never cold thanks to the fire (so we can do this pretty late/early in the year).


The fire went strong …


The Eidgenoss beer was finished first (it was actually pretty good) :-) .


The fire went very strong until the bitter end of the wood; we were nearly running out at 12:45am (nearly 8 hours after the start). We decided to leave some wood for the next people :-)

Microsoft Store – the worst shopping experience I can remember

You would think that a company like Microsoft would have its online retailing somewhat under control. My first (and probably last) attempt to order something there failed miserably. Here’s the story:

I needed a new laptop for teaching: not too pricey, touchscreen, convertible. The Acer Aspire 11 seemed to fit that category. So I found a good deal on the Microsoft Store for $449 through that link. It was Thursday, August 13th, and I needed it by August 24th — great, shipping in 3-7 business days, that works!

I added it to the cart, created a new account, verified it, works! Then I proceeded to checkout and after entering my credit card information the whole thing crashed. I only got a blank page and nothing else. Well, ok, close the store and retry logging in. Of course some cookie got stuck and when logging in, I got only the default error message “An error has occurred, ask support”. It is Thursday night.

Ok, well, there’s this chat feature and I tried it. Thirty minutes later, the person at the other end told me that the product I just purchased is not available. Well, weird, I sent her the link and she acknowledged that she sees the “add to cart” button but the product is not available. Huh, must be a bug? At the end, she could not push the old order through (something I do on a regular basis because I travel a lot). I remark that I had (have) an order number and everything but it seemed like this is not good for anything — I’m wondering what kind of database they have. It was also confusing that she constantly asked me what I ordered and who I am (I mean the order number should have these things attached … oh well).

Fine, the conclusion was to try another browser and re-order it myself. An hour of my life gone … I tried Firefox (it was Chrome before) and indeed the store worked again (no cookies). I was able to order it. But now my bank declined the order due to a fraud alert. Fine, I called the bank and pushed the order through; the bank acknowledged (via email, as usual) the full charge, and Microsoft sent an order confirmation saying “it may take as long as 4-6 hours for us to process it.” Phew, done!

Ok, great … now it’s Friday and I have not gotten any shipping confirmation from Microsoft. Weird … 4-6 hours turned into 48 hours. I call the support (chat doesn’t seem to work to inquire about orders). The support line is overly complex and annoying trying to verify my account (why!? I have an order number, what role does the account play?). It takes minutes for them to send a challenge/response email to my self-made email address (as if this is any verification …). Well, I wait patiently on the line, this is my first call. So they tell me again that the product I ordered does not exist. But hey, I have an order confirmation!!!??? Then they blame the bank, I tell them to charge the bank right now again to check. They can’t do it, not sure why. Apparently, it needs to be “escalated”. They take my number and I’ll hear within 24 hours. Fine.

Well, I guess they weren’t able to call a German number, so I didn’t hear anything for 48 hours. Just nothing, no email, nothing. It nearly seems like they silently hope I forgot about the order (and the bank charge). It’s Tuesday the 18th now, getting tight. I call them again. They tell me it was escalated … well, yeah, I know this since I just gave her the case number *hmpf*. Each of these calls takes 30 minutes at least (partly due to the silly account verification even though I have an order number AND a case ID). Well, fine, no news, I need to wait for the “escalation team” which apparently cannot be reached and only operates by interrupting me. I’m a busy person and this is a silly concept, but fine, wait again.

Next day, nothing happened. I call again. AGAIN they tell me that the product I ordered does not even exist anymore. Well, I spell the link above into the phone and the other side is surprised and confused. Then, they are quick to tell me that there was also a problem with my bank, but apparently they don’t see that it was resolved (must really be a great database). I gave up; now I just want to cancel the order. BUT they CANNOT cancel it. I now have to rely on their system to drop the order after a while (which it may or may not do; it’s not clear if it’ll wake up in the future and suddenly charge my card and send this laptop). This is a truly horrible shopping system. So fine, I’ll rely on their word; after all, they boast with free returns. But this system appears extremely unprofessional. Microsoft should be able to do better. THIS is not the way to do business.

I spent a total of four and a half hours on the phone and in chats, all for nothing. I’m not going to compute what my time was worth … definitely more than the laptop.

Then I ordered the same thing on Amazon; within minutes I had an order confirmation, the charge went through, and everything was on its way. However, due to the great Microsoft delay, I had to pay $15 extra for expedited shipping. Thank you, Microsoft, this is wonderful!

And the saga continues: This morning, I received an email regarding my case ID. They DID NOT GET that I cancelled this order. Well, why should they, it cannot be cancelled after all. Wow, this is getting truly crazy and very unprofessional. I cannot recommend business with the Microsoft store. Fortunately, I know many higher-up Microsoft employees, I’ll mention this next time I’m in Redmond. Sadly, this is how one creates a bad reputation. I hope this documentation helps to improve the process!

Update (15/8/20): It is getting better — I sent them a link to this description and the answer is: “However we do apologize for the inconvenience that the computer you are requesting is now out of stock and you will not get this PC at the sale price.” – Wow, they’re good at making snarky apologies that don’t sound apologetic at all. There is of course no word about cancelling my order or anything (it may still be “impossible”). The item is also STILL on the store webpage and I can still add it to my cart. Yesterday, I thought it couldn’t get worse, but they never cease to surprise me!

Update (15/8/22): Microsoft, please stop sending me emails. I now received three (!!) more emails, two of them with identical content (see above). I guess it’s not enough to make the snarky comment once. The whole support system now looks to me like an AI/ML algorithm gone wild. I will not reply because I fear it’ll trigger more frustration!

Update (15/8/23): This is no joke, I received another (fourth) email about this. The exact same content as two of the emails before … Microsoft is not missing any chance for snarky comments “… you will not get this PC at the sale price.”. Yes, remind me that I should feel ripped off every day now … please stop!

Update (15/8/25): It is getting funny now. I received another email. Now it is essentially empty and only contains the default text which seems to ask me to call them. But I am not going to do this … well, each call costs me 30 minutes. I also already canceled my order. Wow, this system is incredibly broken, unbelievable. I am typing this post on the other laptop already …

The event for HPC Networking — Hot Interconnects 2015 — coming up soon!

IEEE Hot Interconnects 2015 (aka HOTI’15) is around the corner, and early registration ends on July 31st! As usual, it takes place in Silicon Valley, where the heart of the interconnects industry beats. Following its 23-year tradition of revealing new interconnect technologies, HOTI’15 will not fall short: new HPC and datacenter network technologies such as Intel’s OmniPath and Bull’s Exascale Interconnect (BXI) will be presented at this year’s conference, followed by a heated panel where members of industry and laboratories fight for their favorite technologies. Will Ethernet and InfiniBand clash with Intel’s and Bull’s new technologies? Will InfiniBand continue to shine? The future is unclear, but the discussions will add to our understanding.

This year’s location is the historic Oracle Agnews Campus in Santa Clara, California. Hot Interconnects (HotI) is the premier international forum for researchers and developers of state-of-the-art hardware and software architectures and implementations for interconnection networks of all scales, ranging from multi-core on-chip interconnects to those within systems, clusters, data centers, and clouds. This yearly conference is attended by leaders in industry and academia, creating a wealth of opportunities to interact with individuals at the forefront of this field.

In addition to novel network technologies and hot discussions, this year’s Hot Interconnects features keynotes from Oracle’s Vice President of Hardware Development, Rick Heatherington, and from David Meyer, the CTO and Chief Scientist of Brocade Communications. There will be a great lineup of exciting talks: e.g., Facebook will discuss their efforts in interconnects and VMware will talk about Network Function Virtualization (NFV).

There will be four technical paper sessions covering the cutting edge in interconnect research and development on cross-cutting issues spanning computer systems, networking technologies, and communication protocols for high-performance interconnection networks. This conference is directed particularly at new and exciting technology and product innovations in these areas.

In addition, there will be four information-loaded tutorials on Big Data processing; advanced flow and congestion control; ONOS, an open-source SDN network operating system; and software-defined wide-area networking. These will provide in-depth coverage of the latest industry developments and standards. Use them to get up to speed in the quickly changing networking field!

All this makes IEEE Hot Interconnects the hub for converging datacenter, HPC, and Big Data networking, an event that cannot be missed! Early registration closes in less than two weeks! See you in Santa Clara in August!

Visit http://www.hoti.org for details!