SuperComputing in Seattle: A PhD Student’s Perspective

Posted: December 14, 2011 at 3:09 pm

High Performance Computing (HPC) is the name of the game, and the SuperComputing conference is the premier event where practitioners, vendors and enthusiasts come together to live and breathe all things HPC-related. This year, SC11 was held at the Washington State Convention Center in Seattle, Washington, to the tune of more than 14,000 attendees. Over the course of a week, the conference showcased the latest and greatest lineups from 350 exhibitors and delivered a plethora of technical presentations. Though this was my first international conference, I can confidently say that the offerings at SC11 would have impressed even the most seasoned conference-goers among you—it really was super!

As a 2nd-year Engineering PhD candidate, I hope to give you my impressions of SC11 from the point of view of a graduate engineer with a keen interest in HPC (though no expert in HPC by any stretch!). I attended the conference with my colleague and fellow PhD buddy, Paul Mignone, and we presented a number of posters on our PhD research. I will preface this article by stating that all opinions expressed here are purely my own, so feel free to leave your thoughts and feedback in the comments section.

Overall Impressions

Seattle is a wintry city at this time of year—not unlike the colder, wetter Melbourne months. Temperatures averaged between 2°C and 10°C, and the threat of rain was constant for most of the week. But don’t let the bleak-sounding November climate discourage you: there was plenty to see and do, both during the conference and in the down-times in and around the city. With a population of just over 600,000, Seattle is the biggest city in the Pacific Northwest. Along with these residents, a few global and household names also call (or once called) this city home: Boeing, Microsoft, Cray, Starbucks, Costco and Amazon, to name a few. With a renowned culinary scene, a decent restaurant is always just a few minutes’ walk away in Downtown Seattle, or, if you’re willing to venture out a little, there are plenty in the International District. Seattle is no stranger to big conferences either: its twin convention centres, the Washington State Convention Center and the adjacent The Convention Center (confusing name, I know), play host to conferences of all sizes throughout the year.

The thrust of SC11 was billed as “data intensive science”, meaning the generation, storage and analysis of very large data sets in a wide variety of scientific domains. This was good news for us engineers, since we don’t get all the “computery stuff”. We do, however, share the goal of being able to crunch gigabytes of data as quickly as possible so we can go have a beer. SC11 did not disappoint in this regard (data crunching and beer—they put on a great closing party at the Space Needle). There were tons of workshops and tutorials that focused on how best to make use of current and emerging technologies to accelerate real-world applications (be they in engineering, bioinformatics or finance).

And then it hits you. Every single person who walks through those doors is interested in one word. Faster. It doesn’t matter what label you put on it: supercomputing, by definition, is all about going faster, bigger, more! Today, this means going parallel: computers aren’t getting faster because they do a single task more quickly, but because they do more at the same time. As expected, parallel computing continued to be the big-ticket item of SC11, in all its incarnations, from hardware to software, and everything in between.
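
To make “doing more at the same time” concrete, here is a minimal sketch of a shared-memory parallel loop in C with OpenMP. It is my own toy illustration, not anything shown at SC11: a single directive splits the iterations across every available core, so the same work finishes sooner even though no individual core got any faster.

    /* parallel_sum.c: a toy OpenMP example; build with: gcc -fopenmp parallel_sum.c */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    int main(void) {
        const long n = 10000000;                /* 10 million elements */
        double *a = malloc(n * sizeof(double));
        double sum = 0.0;

        /* Each iteration is independent, so OpenMP hands chunks of the
           loop to different cores; 'reduction' safely combines the
           per-thread partial sums at the end. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < n; i++) {
            a[i] = i * 0.5;
            sum += a[i];
        }

        printf("sum = %g (using up to %d threads)\n", sum, omp_get_max_threads());
        free(a);
        return 0;
    }

On a quad-core machine that one pragma can speed the loop up almost four-fold; supercomputers simply push the same idea out to hundreds of thousands of cores.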

No SC conference would be complete without the announcement of the November iteration of the TOP500, a biannually published list ranking the 500 fastest supercomputers in the world. Of note was the fact that, for the first time since the TOP500’s inception in 1993, the top 10 supercomputers remained completely unchanged from the previous listing back in June 2011.

The Exhibit

View of WSCC and above-street walkway

The exhibit hall spanned two buildings, from the Washington State Convention Center (WSCC), above the street via the 4th-floor walkway, into the adjacent building, which was confusingly named “The” Convention Center (TCC). A total of 350 exhibitor booths called this home for three full days. All the big-name players were present: Intel, Cray, IBM, SGI, HP, NVIDIA, AMD, as well as cloud and other service providers like Amazon and Facebook. Cray had their shiny new XK6 hybrid CPU–GPU cabinets on display, while Intel wooed passersby with barrel-rolling fun in an interactive 360° immersive flight simulator. Many of the larger booths sported a mini presentation stage, where technical talks throughout the day showed off the best the company’s tech had to offer.

Oak Ridge National Lab booth

A flamboyant presenter at the NVIDIA GPU Technology Theatre

The national labs put in a strong showing: reps from Los Alamos, Oak Ridge, Argonne, Sandia, etc. were all too happy to have a chat and slip me a glossy information packet. Of the 132 listed Research Exhibitors, I counted 40 universities—it was especially encouraging to see academia so well-represented.

The Aussie VPAC/VLSCI booth

The VPAC/VLSCI booth did the Aussies proud, boasting the highly coveted ‘Kangaroos next 25 km’ souvenir signs and matching kangaroo bottle openers—worthy trinkets for all lucky enough to venture our way. The sheer number of things going on at the same time on the exhibition floor was simply staggering. With the lure of more free swag than you could shake your baggage limit at, it was imperative that you plotted your loot-gathering expeditions efficiently between the lengthier chit-chats with booth folk.

Technical Program

The technical program showcased an incredible breadth (and depth) of topics. Of particular note to the engineering community were the Masterworks lectures. These featured talks by world-leading researchers who shed light on some of the most innovative uses of HPC in solving the most computationally challenging problems in existence, from hypersonic flight, to fusion energy, to financial analysis and beyond.

As I mentioned already, the workshops and tutorials provided a great opportunity for researchers and practitioners to quickly come up to speed with how HPC could be applied in their own fields. Between Paul and me, we attended four tutorials and two workshops. Some of the topics included:

  • Building Infrastructure Clouds for Scientific Computing
  • Advanced GPU Computing
  • Parallel Computing with Co-Array Fortran
  • Python for High Performance and Scientific Computing

These were all of a very high standard, and for the most part the hands-on interactive components were extremely valuable (excepting the usual issues associated with overloaded server resources, etc.). However, the tutorials ran over three days as either whole-day (8:30–5:00) or half-day sessions, on the same days as the workshops. This meant that anyone who attended a full-day tutorial (which were paid for in advance) pretty much missed out on everything else on that day. With at least half a dozen other concurrent panel discussions and presentations going on, I very quickly realised I wouldn’t even come close to seeing all the sessions that I had marked as “interesting.”

The Buzz of SC11

At this juncture I feel it’s appropriate to share with you a few of the buzzwords that were all the rage this year at SC11. There would have been others, depending on your specific area of expertise, but I feel these were the main ones:

  • Cloud – quite appropriate for Seattle at this time of year.
  • GPU (or heterogeneous) computing
  • Exascale – this was the ‘big one’.

Apart from bringing the rains, clouds featured at the conference in another big way. I am, of course, referring to cloud computing. We’re already familiar with cloud storage: these days your emails, contact lists, bookmarks, music and even entire hard drives can be uploaded into the cloud. But what can this technology do for you as a researcher? Well, just as you never really worry about how or where your messages and files are stored once you’ve beamed them up, the same concept applies to compute clouds: you get virtual machines that magically appear out of the cloud, ready to do your bidding. Just as with your friendly local cluster account, you log in via the internet and submit and run jobs. The only difference is, you don’t care (or even know) where the physical machine is running (that’s the “cloud” part). Today it could be in the continental US, tomorrow it might be Singapore or Ireland. Once you’re finished, the virtual machine shuts down and disappears into the cloud. None of the maintenance associated with running a physical cluster. And the best part? It’s cheap to get started. Really cheap. Cloud providers charge just a few cents per hour for the most basic set-ups.
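
For a rough sense of what “cheap” means, take a hypothetical entry-level rate of 2 cents per hour (actual 2011 prices varied by provider and instance size):

    $0.02/hour × 24 hours × 30 days ≈ $14.40 for a month of continuous uptime

and you pay nothing at all while the machine is shut down. Compare that with the purchase, power and air-conditioning bills of even a modest in-house cluster.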

Most software vendors at SC11 had released, or were in the process of releasing, some cloud-enabled functionality within their applications. Whether it was cloud-based document management accessible from anywhere in the world, or the ability to run entire simulations using the vast resources of Microsoft’s Azure cloud, everyone was excited about the cloud. In a word, the cloud was democratising HPC.

Infrastructure cloud computing tutorial

Second buzzword? GPU. It stands for Graphics Processing Unit, and it is the pride and joy of computer gamers the world over. Owing to their phenomenal capacity for parallel processing (a typical GPU has cores that number in the hundreds, compared to the usual four in a CPU), GPUs have found traction in all areas of scientific research in recent years. High-end NVIDIA Fermi GPUs are found in three of the top five supercomputers in the TOP500 (though, interestingly, not in the current No. 1-ranked supercomputer, the Fujitsu K Computer). It is not surprising, then, that Jen-Hsun Huang, the co-founder and CEO of NVIDIA, was invited to deliver the keynote speech of SC11 to a packed-out Sheraton Ballroom.
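
To give you a feel for what programming one of these involves, here is a minimal CUDA C sketch of the classic “saxpy” operation (y = a*x + y). Again, this is my own toy example rather than anything from the conference: the key idea is that instead of one core looping over a million elements, you launch a million lightweight threads and let each one handle a single element.

    /* saxpy.cu: a toy CUDA example; build with: nvcc saxpy.cu */
    #include <stdio.h>
    #include <stdlib.h>

    /* Each GPU thread computes exactly one element of y. */
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }

    int main(void) {
        const int n = 1 << 20;                  /* ~1 million elements */
        size_t bytes = n * sizeof(float);
        float *hx = (float *)malloc(bytes);     /* host (CPU) arrays */
        float *hy = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) { hx[i] = 1.0f; hy[i] = 2.0f; }

        float *dx, *dy;                         /* device (GPU) copies */
        cudaMalloc((void **)&dx, bytes);
        cudaMalloc((void **)&dy, bytes);
        cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

        /* Launch one thread per element, 256 threads per block. */
        saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);

        cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
        printf("y[0] = %.1f (expected 5.0)\n", hy[0]);

        cudaFree(dx); cudaFree(dy); free(hx); free(hy);
        return 0;
    }

I’ll digress for the moment and cover the keynote, before returning to the third and final buzzword I mentioned above.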

Keynote

For anyone who has watched Jen-Hsun Huang speak before, this keynote was delivered in typical Jen-Hsun style: he has an incredibly engaging ability to lead his audience down a path dotted with captivating and light-hearted anecdotes, yet never fails to drive home a clear take-home message. The underdog story of how NVIDIA evolved from a humble 3D graphics chip manufacturer to the world’s leading supplier of GPU products was hard to miss! Nor were the obvious plugs for his buddies over at Electronic Arts and Ubisoft, when he played minutes-long trailers for Battlefield 3 and Assassin’s Creed: Revelations. You would be forgiven if you thought for a moment that, instead of a keynote at SuperComputing, you had been teleported into a game studio press event.

Jen-Hsun Huang delivers the SC11 keynote

The two big product announcements coming out of the keynote for NVIDIA were OpenACC, a new directives-based compiler standard for heterogeneous computing; and NVIDIA Maximus, a real-time, coupled visualisation–simulation solution targeted at workstation platforms in the professional market. You can catch the hour-long keynote here.
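
For a sense of what “directives-based” means in practice: rather than rewriting your code in CUDA, you annotate ordinary loops with compiler hints. The snippet below is my own illustration of the OpenACC style as announced (not code from the keynote); built with an OpenACC-aware compiler, such as PGI’s at the time, the loop is offloaded to the GPU, while any other compiler simply ignores the pragma and runs it serially.

    /* The same saxpy loop from earlier, now in plain C with an
       OpenACC directive. The data clauses tell the compiler which
       arrays to copy to and from the accelerator. */
    void saxpy(int n, float a, const float *restrict x, float *restrict y) {
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }

The appeal to engineers is obvious: existing C and Fortran codes can be accelerated incrementally, one loop at a time, without a wholesale rewrite.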

Exascale

Jen-Hsun’s keynote, entitled ‘Exascale: An Innovator’s Dilemma’, dovetails nicely into the third and final buzzword: exascale. This is a big number. In case you’re thinking of looking up just how big, let me save you the trouble: 1,000,000,000,000,000,000. Or 10¹⁸. And when you’re talking computing power (FLOPS), that’s a lot of calculations every second. In fact, it’s more than ten times the processing power of all the supercomputers in the current TOP500 combined. That’s scary. Am I the only one thinking SkyNet?
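
To put that in perspective with some back-of-the-envelope arithmetic: the K Computer, the No. 1 machine at the time, clocked in at roughly 10.5 petaflops (10.5 × 10¹⁵ FLOPS) on Linpack, so

    1 exaflop ÷ 10.5 petaflops = 10¹⁸ ÷ (1.05 × 10¹⁶) ≈ 95

in other words, an exascale machine would be worth about a hundred K Computers running flat out.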

Regardless, the US Department of Energy has set 2018 as the target for reaching that milestone: see here. The target power consumption? 20 MW. Jen-Hsun Huang argued that, on the current outlook, NVIDIA GPUs can get us there by 2022 (with a very good chance they can do it even sooner). Whether you agree that GPUs—NVIDIA GPUs or otherwise—are the key to exascale depends on which side of the GPUs-can-solve-all-the-world’s-problems fence you’re on. I personally think we are only at the dawn of the GPU age when it comes to scientific computing, and since the computer games market isn’t going away any time soon, neither is the impetus for continued GPU advancements. Any way you cut it, it’s a win for science.

Road to Exascale: as explained by Jen-Hsun Huang

While NVIDIA enjoys a strong market position with its present GPU offerings, it looks set to face stiffer competition in the HPC market in 2012 from Intel’s 50-plus-core Knights Corner chip, which also made headlines at SC11. GPUs or not, energy efficiency is the clear challenge for chip manufacturers going forward. The current performance trendline for the No. 1 machine in the TOP500 (slide 6) crosses the exascale mark in 2019, so it’s not beyond reason that someone will put together a giant machine capable of hitting that mark by 2018. To do so within the power envelope set by the DoE, however, will be a non-trivial technological challenge. Keeping to this power-efficiency theme, the Barcelona Supercomputing Center announced a prototype hybrid architecture featuring an NVIDIA GPU coupled to an ARM CPU on the same board. Named the Mont-Blanc Project, the all-European consortium behind it hopes to take the power-savvy road to exascale glory by leveraging the low-power footprint of embedded microprocessor technology.

Our Posters

Up to this point you may be wondering how two PhD students managed to swindle their way into an international conference like SC11. Allow me to clear this up: despite what looks like all fun and games, we did in fact have real work to present. Paul and I presented one poster each at the Early Adopters PhD Workshop (EAPW), organised by Dr Wojtek Goscinski (Monash eResearch Centre and coordinator of MASSIVE—the GPU cluster at Monash and the Australian Synchrotron). This workshop, run for the third time at SC11, provides an opportunity for students in the early stages of their PhDs to present a research poster to a contingent of expert reviewers, as well as conference attendees generally, for feedback. There were 35 posters this year, from a diverse range of fields: link.

Additionally, we were delighted to have been successful in submitting an electronic poster to the main Poster Session of the Technical Program. In total, there were 70 posters in the Program at SC11, eight of which were “ePosters” displayed on 50-inch LCD screens (see below). This novel presentation mode allowed us to show off visualisations that would not have been possible using traditional paper posters. The range of animations and interactive content that can be delivered using this format is almost limitless, and makes the communication of complex ideas infinitely more engaging. In all, I think this is a fantastic innovation, which I hope to see more widely adopted (note to conference organisers!).

Paul and me presenting our electronic poster

Conclusion

If you’ve reached this point of the article, well done! This is all I have for you. I hope it has given you a flavour of SC11 from an engineer’s and PhD student’s point of view. Though it was a bit overwhelming at first, I cannot speak highly enough of a conference on the scale of SC11. Those fellow PhD students who are yet to enjoy a similar experience have a lot to look forward to. Thank you for reading, and I sincerely hope I’ve tempted you into considering joining us at SC12 in Salt Lake City!

Acknowledgements

I would like to thank my primary funding body, the Defence Materials Technology Centre (DMTC), for its ongoing financial and technical support. Also, I thank the Research Training–Conference Assistance Scheme (RT–CAS), and the Clive Pratt Scholarship Fund for their financial support towards this conference.


Michael Wang
m.wang@unimelb.edu.au

December, 2011

Comments

  1. Bernard Meade says:

    Great article Michael, and an excellent talk given at the HPC forum. I’ve always enjoyed SC when I’ve attended and this sort of article really captures the spirit of the conference, hopefully engaging the interest of a few more people who might otherwise not know about it.

  2. Michael Wang says:

Thanks, Bernard. I’m glad the enthusiasm came through. It was so great being able to see first-hand what the rest of the world are doing in this area – and what’s really encouraging is that, even being so far from the centre of all the “action”, we @UniMelb are clearly punching above our weight. Just not many people know about it. So definitely good to share the positives. Anything to help get the word out there!
