My Next Chapter in Distributed Computing: Joining Sylabs to Containerize HPC and Deep Learning

HPC in the Cloud?

Back in 2015, my long-time friend and scientific collaborator James (Jim) Freemantle suggested I give a talk to his local association of remote-sensing professionals. In hindsight, his suggestion to juxtapose Cloud computing and High Performance Computing (HPC) in that talk would prove far more important for me than I appreciated at the time. The abstract for my talk, High Performance Computing in the Cloud?, is still available via the Ontario Association of Remote Sensing (OARS) website here; it read:

High Performance Computing (HPC) in the Cloud is viable in numerous applications. Common to all successful applications for cloud-based HPC is the ability to embrace latency. Early successes were achieved with embarrassingly parallel HPC applications involving minimal amounts of data – in other words, there was little or no latency to be hidden. More recently the HPC-cloud community has become increasingly adept in its ability to ‘hide’ latency and, in the process, support increasingly more sophisticated HPC applications in public and private clouds. In this presentation, real-world applications, deemed relevant to remote sensing, will illustrate aspects of these sophistications for hiding latency in accounting for large volumes of data, the need to pass messages between simultaneously executing components of distributed-memory parallel applications, as well as (processing) workflows/pipelines. Finally, the impact of containerizing HPC for the cloud will be considered through the relatively recent creation of the Cloud Native Computing Foundation.

I delivered the talk in November 2015 at Toronto’s Ryerson University to a small but engaged group, and made the slides available via the OARS website and Slideshare.

As you can read for yourself in the abstract and slides, or hear in the Univa webinar that followed in February 2016, I placed a lot of emphasis on latency in juxtaposing the cloud and HPC; from the perspective of HPC workloads, that emphasis remains justifiable today. Much later, however, in working hands-on on various cloud projects for Univa, I’d come to appreciate the challenges and opportunities that data introduces; but more on that (data) another time …

Cloud-Native HPC?

Also in hindsight, I am pleased to see that I made this relatively early connection (for me, anyway) with the ‘modern’ notion of what it means to be cloud native. Unfortunately, this is a phrase bandied about with such reckless abandon at times that it risks becoming devoid of meaning. So, in my OARS talk and the Univa webinar, I grounded cloud native, in part, in the definition offered by the Cloud Native Computing Foundation:

Cloud native computing uses an open source software stack to deploy applications as microservices, packaging each part into its own container, and dynamically orchestrating those containers to optimize resource utilization.

If you possess even a high-level working knowledge of HPC, you’ll immediately appreciate the tension likely to surface in realizing any vision of ‘cloud-native HPC’. Why? HPC applications have not traditionally been architected with microservices in mind; in fact, their implementations are the polar opposite of microservices. The notion of taking existing HPC applications and simply executing them within a Docker container therefore presents challenges as well as opportunities – even though numerous examples of successfully containerized HPC applications do exist (see, for example, the impressive array of case studies over at UberCloud).

In some respects, when it comes to containerizing HPC applications, this is just the tip of the iceberg. In following up on the Univa webinar with a Q&A blog on HPC in the Cloud in February 2016, I quoted Univa CTO Fritz Ferstl in regard to a question on checkpointing Docker containers:

The mainstream of the container ecosystem views them as ephemeral – i.e., you can just kill them, restart them (whether on the same node or elsewhere), and then they somehow re-establish ‘service’ (i.e., what they are supposed to do … even though this isn’t an intrinsic capability of a Docker container).

Whereas ephemeral resonates soundly with microservices-based applications, it is hardly a ‘good fit’ for HPC applications. And because they share numerous characteristics with traditional HPC applications, emerging applications and workloads in AI, Deep Learning and Machine Learning suffer a similar fate: they aren’t always a good fit for traditional containers along the implementation lines of Docker. For years now, traditional HPC applications and workflows – and more recently, Deep Learning use cases – have carried an implied requirement to tap into GPUs as computational resources. From nvidia-docker to the relatively recent and impressive integration between Univa Grid Engine and Docker, it’s blatantly evident that significant technical gymnastics are required to leverage GPUs for applications one seeks to execute within a Docker container.

A Singular Fit

For these and other reasons, Singularity has been developed as a ‘vehicle’ for containerization that is simply a better fit for HPC and Deep Learning applications and their corresponding workflows. Because I have very recently joined the team at Sylabs, Inc. as a technical writer, you can expect to hear a whole lot more from me on containerization via Singularity – here, or even more frequently, over at the Sylabs blog and in Lab Notes.

Given that my acknowledged bias towards Distributed computing includes a significant dose of Cloud computing, I’m sure you can appreciate that I regard my new opportunity with Sylabs with far more than casual enthusiasm – a startup that is defining how GOV/EDU as well as enterprise customers can sensibly and easily reap the benefits of containerizing their applications and workflows on everything from isolated PCs (laptops and desktops) to servers, VMs and instances in their datacenters and clouds.

From my ‘old’ friend and collaborator Jim to the team at Sylabs that I’ve just joined, and to everyone in between, it’s hard not to feel a sense of gratitude at this juncture. With HPC’s premier event less than a month away in Dallas, I look forward to reconnecting with ‘my peeps’ at SC18, and to ensuring they are aware of the exciting prospects Singularity brings to their organizations.

Towards Tsunami Informatics: Applying Machine Learning to Data Extracted from Twitter

2018 Sulawesi Earthquake & Tsunami

Even in 2018, our ability to provide accurate tsunami advisories and warnings is exceedingly challenged.

In best-case scenarios, advisories and warnings afford inhabitants of low-lying coastal areas minutes or (hopefully) longer to react.

In best-case scenarios, advisories and warnings are based upon in situ measurements via tsunameters – as ocean-bottom changes in seawater pressure serve as reliable precursors of impending tsunami arrival. (By way of analogy, tsunameters ‘see’ tsunamis much as radars ‘see’ precipitation. Based on ‘sight’, then, both offer a reasonable ability to ‘nowcast’.)

In typical scenarios, however, advisories and warnings can communicate mixed messages. In the case of the recent Sulawesi earthquake and tsunami, for example, a nearby alert (for the Makassar Strait) was retracted after some 30 minutes, even though Palu, Indonesia, experienced a ‘localized’ tsunami that resulted in significant losses – with current estimates placing the number of fatalities at more than 1,200 people.

With ultimate regret stemming from significant loss of human life, the recent case of the residents of Palu is particularly painful, as alerting was not informed by tsunameter measurements owing to an ongoing, unresolved dispute – one that left the deployment of an array of tsunameters incomplete and inoperable, and that, if resolved, could have provided this low-lying coastal area with accurate and potentially life-saving alerts.

Lessons from Past Events

It’s been only 5,025 days since the previous tsunami to affect Indonesia – the also-devastating Boxing Day 2004 event in the Indian Ocean. All things considered, it’s truly wonderful that a strategic effort to deploy a network of tsunameters in this part of the planet was in place; of course, it’s well beyond tragic that execution of the project was significantly hampered, and that almost 14 years later, inhabitants of this otherwise idyllic setting are left to suffer loss of such epic proportions.

I’m a huge proponent of tsunameters as last-resort, yet accurate, indicators for tsunami alerting. In their absence, the norm is advisories and warnings that may deliver accurate alerts – ‘may’ being the operative word here, as it is often the case that alerts are issued only to be retracted at some future time … as happened again for the recent Sulawesi event. Obviously, tsunami centers that ‘cry wolf’ run the risk of not being taken seriously – perhaps precisely when they have correctly predicted an event of some significance.

It’s not that the scientific teams of geographers, geologists, geophysicists, oceanographers and more are in any way lax in attempting to do their jobs; it’s that the matter of tsunami prediction is exceedingly difficult. For example, unless you caught the January 2006 issue of Scientific American as I happened to, you’d likely be unaware that 4,933 days ago an earthquake affected (essentially) the same region as the Boxing Day 2004 event; regarded as a three-month-later aftershock, this event of similar magnitude and tectonic setting did not result in a tsunami.

Writing in that January 2006 issue of Scientific American, Geist et al. compared the two Indian Ocean events side-by-side – using one of those diagrams this magazine is lauded for. The similarities between the two events are compelling. The seemingly subtle differences, however, are much more than compelling – as the earlier, tsunami-producing event bears testimony.

As a student of theoretical, global geophysics, but not specifically of oceanography, seismology, tectonophysics or the like, I was unaware of the ‘shocking differences’ between these two events. They captivated my interest instantaneously, however!

Towards Tsunami Informatics

Graph Analytics?

It would take some 3,000 days, however, for that captivated interest to be transformed into a scientific communication. On the heels of successfully developing a framework and platform for knowledge representation with long-time friend and collaborator Jim Freemantle and others, our initial idea was to apply graph analytics to data extracted from Twitter – acknowledging that Twitter has the potential to serve as a source of data of value in the context of tsunami alerting.

In hindsight, it’s fortunate that Jim and I did not spend a lot of time on the graph-analytics approach. In fact, arguably the most valuable outcome of the poster we presented at a computer-science conference in June 2014 (HPCS, Halifax, Nova Scotia) was Jim’s Perl script (see, e.g., Listing 1 of our subsequent unpublished paper, or Listing 1.1 of our soon-to-be published book chapter) that extracted keyword-specified data (e.g., “#earthquake”) from Twitter streams.
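To give a feel for what that extraction step involves, here is a rough Python sketch of the same idea. Jim’s original was a Perl script, and the input format assumed here (one JSON-encoded tweet per line) and the keyword list are illustrative assumptions of mine rather than a reproduction of Listing 1.

```python
# Rough sketch of keyword-based tweet extraction; the input format
# (one JSON tweet per line on stdin) and the "text" field are assumptions.
import json
import sys

KEYWORDS = ("#earthquake", "#tsunami")  # hypothetical keyword list

def extract(lines, keywords=KEYWORDS):
    """Yield the text of any tweet mentioning one of the given keywords."""
    for line in lines:
        try:
            tweet = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed records
        text = tweet.get("text", "")
        if any(keyword.lower() in text.lower() for keyword in keywords):
            yield text

if __name__ == "__main__":
    for text in extract(sys.stdin):
        print(text)
```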

Machine Learning: Classification

About two years later, stemming from conversations at the March 2016 Rice University Oil & Gas Conference in Houston, our efforts began to emphasize Machine Learning over graph analytics. Driving for results to present at a May 2016 Big Data event at Prairie View A&M University (PVAMU, also in the Houston area), a textbook example (literally!) taken from the pages of an O’Reilly book, Learning Spark, showed some promise in allowing Jim and me to classify tweets – with hammy tweets encapsulating something deemed geophysically interesting, and spammy ones not so much. ‘Not so much’ was determined through supervised learning – in other words, the results we reported were achieved after a manual classification of tweets for the purpose of training the Machine Learning models. The need for manual training, and the absence of semantics, struck the two of us as ‘lacking’ from the outset; more specifically, each tokenized word of each tweet was represented as a feature vector – stated differently, data and metadata (e.g., Twitter handles, URLs) were all represented with the same (lacking) degree of semantic expression. Based upon our experience with knowledge-representation frameworks, we immediately sought a semantically richer solution.
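To make that workflow a little more concrete, here is a minimal PySpark sketch in the spirit of the Learning Spark example – hashed term frequencies feeding a logistic-regression classifier. The file names, feature dimensionality and sample tweet are assumptions of mine, not a reproduction of the code behind the results we presented at PVAMU.

```python
# Minimal sketch of supervised tweet classification with Spark MLlib;
# file names and parameters are illustrative assumptions only.
from pyspark import SparkContext
from pyspark.mllib.feature import HashingTF
from pyspark.mllib.classification import LogisticRegressionWithSGD
from pyspark.mllib.regression import LabeledPoint

sc = SparkContext(appName="TweetClassifier")

# Manually labelled training data: hammy (geophysically interesting) vs. spammy.
ham = sc.textFile("hammy_tweets.txt")    # hypothetical file
spam = sc.textFile("spammy_tweets.txt")  # hypothetical file

# Each tokenized word of each tweet contributes to a (hashed) feature vector;
# handles, URLs and 'real' words all land in the same feature space.
tf = HashingTF(numFeatures=10000)
ham_features = ham.map(lambda tweet: tf.transform(tweet.split(" ")))
spam_features = spam.map(lambda tweet: tf.transform(tweet.split(" ")))

# Supervised learning: label hammy tweets 1 and spammy tweets 0.
training = ham_features.map(lambda f: LabeledPoint(1, f)).union(
    spam_features.map(lambda f: LabeledPoint(0, f))).cache()

model = LogisticRegressionWithSGD.train(training, iterations=100)

# Classify a previously unseen tweet.
print(model.predict(tf.transform("magnitude 7.5 #earthquake near Palu".split(" "))))
```

Note how data and metadata receive identical treatment in the hashed feature space – precisely the lack of semantic expression that pushed us towards embedded word vectors.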

Machine Learning: Natural Language Processing

It wasn’t until after I’d made a presentation at GTC 2017 in Silicon Valley the following year that the idea of representing words as embedded vectors registered with me. Working with Jim, I made two unconventional choices – namely, GloVe over word2vec and PyTorch over TensorFlow. Whereas academic articles justified our choice of Stanford’s GloVe, the case for PyTorch was made on less-rigorous grounds – grounds expounded in my GTC presentation and our soon-to-be published book chapter.

Our uptake of GloVe and PyTorch addressed our scientific imperative, as results were obtained for the 2017 instantiation of the same HPCS conference where this idea of tsunami alerting (based upon data extracted from Twitter) was originally hatched. In employing Natural Language Processing (NLP) via embedded word vectors, Jim and I were able to quantitatively explore tweets as word-based time series, with word vectors derived from co-occurrences – stated differently, this quantification is based upon ‘the company’ (usage associations) that words ‘keep’. By referencing the predigested corpora available from the GloVe project, we were able to explore “earthquake” and “tsunami” in terms of distances, analogies and various kinds of similarities (e.g., cosine similarity).
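As an illustration of what ‘keeping company’ looks like in practice, the following PyTorch sketch loads pretrained GloVe vectors and computes one such cosine similarity. The file name (glove.6B.100d.txt, one of the predigested corpora distributed by the GloVe project) is an assumption, and this is illustrative rather than the code behind our reported results.

```python
# Minimal sketch: load pretrained GloVe vectors and compare two words;
# the vector file is an assumption, and only illustrative words are used.
import torch
import torch.nn.functional as F

def load_glove(path):
    """Map each word in the pretrained corpus to its embedded vector."""
    embeddings = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, *values = line.rstrip().split(" ")
            embeddings[word] = torch.tensor([float(v) for v in values])
    return embeddings

glove = load_glove("glove.6B.100d.txt")

# Words that 'keep similar company' in the corpus have nearby vectors,
# so cosine similarity quantifies their association.
similarity = F.cosine_similarity(glove["earthquake"], glove["tsunami"], dim=0)
print(f"cosine similarity (earthquake, tsunami): {similarity.item():.3f}")
```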

Event-Reanalysis Examples

Our NLP approach appeared promising enough that we closed out 2017 with a presentation of our findings to date during an interdisciplinary session on tsunami science at the Fall Meeting of the American Geophysical Union, held in New Orleans. To emphasize the scientific applicability of our approach, Jim and I focused on reanalyzing two pairs of events (see Slide 10 here). Like the pair identified years previously in the 2006 Scientific American article, each of the more recent event pairs we chose comprised an earthquake-only event and a tsunamigenic event originating in close geographic proximity, with similar oceanic and tectonic settings.

The most promising results we reported (see slides 11 and 12 here and below) involved the cosine similarities obtained for earthquake-only versus tsunamigenic events; as is evident via clustering, the approach appears able to discriminate between the two classes of events based upon data extracted from Twitter. Even in our own estimation, however, the clustering is weakly discriminating at best, and we expect to apply more-advanced approaches to NLP to further separate the classes of events.

[Embedded slides: 2017 AGU Fall Meeting – Twitter Tsunami – December 8, 2017]

Discussion

Ultimately, the ability to further validate and operationally deploy this alerting mechanism would require that the data from Twitter be streamed and processed in real time – a challenge to which some containerized implementation of Apache Spark, for example, would seem ideally suited. (Aspects of this Future Work are outlined in the final section of our HPCS 2017 book chapter.)
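By way of illustration only, such a real-time pipeline might look something like the Spark Structured Streaming sketch below; the socket source, host/port and keyword filter are assumptions of mine, not the design proposed in the book chapter.

```python
# Illustrative sketch of real-time tweet filtering with Spark Structured
# Streaming; the socket source and keywords are assumptions, and the
# downstream NLP/classification stages are omitted.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, lower

spark = SparkSession.builder.appName("TweetStream").getOrCreate()

# Read a stream of tweet texts, one per line, from a (hypothetical) socket feed.
tweets = spark.readStream.format("socket") \
    .option("host", "localhost").option("port", 9999).load()

# Keep only tweets mentioning the keywords of interest.
matches = tweets.filter(lower(col("value")).rlike("#earthquake|#tsunami"))

# Emit matching tweets as they arrive; in practice these would feed the
# embedding and classification stages described above.
query = matches.writeStream.outputMode("append").format("console").start()
query.awaitTermination()
```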

When it comes to tsunamis, alerting remains a challenge – especially in those parts of the planet under-serviced by networks of tsunameters … and even seismometers, tide gauges, etc. Prospects for enhancing alerting capabilities thus remain valuable and warranted. Even though inherently fraught with subjectivity, data extracted from Twitter streams in real time appears to hold some promise as a data source that complements the objective output of scientific instrumentation. Our approach, based upon Machine Learning via NLP, has demonstrated promising-enough early signs of success that ‘further research is required’. Given that this initiative has already benefited from useful discussions at conferences, suggestions are welcome, as it’s clear that even NLP has a lot more to offer beyond embedded word vectors.