My Next Chapter in Distributed Computing: Joining Sylabs to Containerize HPC and Deep Learning

HPC in the Cloud?

Back in 2015, my long-time friend and scientific collaborator James (Jim) Freemantle suggested I give a talk to his local association of remote-sensing professionals. In hindsight, and as it would turn out much more importantly for me, he also suggested I juxtapose Cloud computing and High Performance Computing (HPC) in this talk. Still available via the Ontario Association of Remote Sensing (OARS) website here, the abstract for my talk, High Performance Computing in the Cloud?, read:

High Performance Computing (HPC) in the Cloud is viable in numerous applications. Common to all successful applications for cloud-based HPC is the ability to embrace latency. Early successes were achieved with embarrassingly parallel HPC applications involving minimal amounts of data – in other words, there was little or no latency to be hidden. More recently the HPC-cloud community has become increasingly adept in its ability to ‘hide’ latency and, in the process, support increasingly more sophisticated HPC applications in public and private clouds. In this presentation, real-world applications, deemed relevant to remote sensing, will illustrate aspects of these sophistications for hiding latency in accounting for large volumes of data, the need to pass messages between simultaneously executing components of distributed-memory parallel applications, as well as (processing) workflows/pipelines. Finally, the impact of containerizing HPC for the cloud will be considered through the relatively recent creation of the Cloud Native Computing Foundation.

I delivered the talk in November 2015 at Toronto’s Ryerson University to a small but engaged group, and made the slides available via the OARS website and Slideshare.

As you can read for yourself from the abstract and slides, or hear in the Univa webinar that followed in February 2016, I placed a lot of emphasis on latency in juxtaposing the cloud and HPC; from the perspective of HPC workloads, that emphasis remains justifiable today. Much later, however, in working hands-on with various cloud projects for Univa, I’d come to appreciate the challenges and opportunities that data introduces; but more on that (data) another time …

Cloud-Native HPC?

Also in hindsight, I am pleased to see that I made this relatively early connection (for me, anyway) with the ‘modern’ notion of what it means to be cloud native. Unfortunately, this phrase is at times bandied about with such reckless abandon that it becomes devoid of meaning. So, in part, in my OARS talk and the Univa webinar, I presented cloud native as understood by the Cloud Native Computing Foundation:

Cloud native computing uses an open source software stack to deploy applications as microservices, packaging each part into its own container, and dynamically orchestrating those containers to optimize resource utilization.

If you possess even a high-level working knowledge of HPC, you’ll immediately appreciate that more than a little tension is likely to surface in realizing any vision of ‘cloud-native HPC’. Why? HPC applications have not traditionally been architected with microservices in mind; in fact, their implementations are the polar opposite of microservices. The notion of taking existing HPC applications and simply executing them within a Docker container therefore presents challenges as well as opportunities – even though numerous examples of successfully containerized HPC applications do exist (see, for example, the impressive array of case studies over at UberCloud).

In some respects, when it comes to containerizing HPC applications, this is just the tip of the iceberg. In following up on the Univa webinar with a Q&A blog on HPC in the Cloud in February 2016, I quoted Univa CTO Fritz Ferstl in regard to a question on checkpointing Docker containers:

The mainstream of the container ecosystem views them as ephemeral – i.e., you can just kill them, restart them (whether on the same node or elsewhere), and then they somehow re-establish ‘service’ (i.e., what they are supposed to do … even though this isn’t an intrinsic capability of a Docker container).

Whereas ephemeral resonates soundly with microservices-based applications, it is hardly a ‘good fit’ for HPC applications. And because they share numerous characteristics with traditional HPC applications, emerging applications and workloads in AI, Deep Learning and Machine Learning suffer a similar fate: they aren’t always a good fit for traditional containers along the implementation lines of Docker. Traditional HPC applications and workflows have long carried an implied requirement to tap GPUs as computational resources, and Deep Learning use cases now carry the same requirement; yet from nvidia-docker to the relatively recent and impressive integration between Univa Grid Engine and Docker, it’s blatantly evident that significant technical gymnastics are required to leverage GPUs from within a Docker container.

A Singular Fit

For these and other reasons, Singularity was developed as a ‘vehicle’ for containerization that is simply a better fit for HPC and Deep Learning applications and their corresponding workflows. Because I have very recently joined the team at Sylabs, Inc. as a technical writer, you can expect to hear a whole lot more from me on containerization via Singularity – here or, even more frequently, over at the Sylabs blog and in Lab Notes.

Given that my acknowledged bias towards Distributed computing includes a significant dose of Cloud computing, I’m sure you can appreciate that I regard my new opportunity with Sylabs with much more than a casual degree of enthusiasm. Sylabs is a startup that is defining how GOV/EDU as well as enterprise customers can sensibly and easily acquire the benefits of containerizing their applications and workflows – on everything from isolated PCs (laptops and desktops) to servers, VMs and/or instances in their datacenters and/or clouds.

From my ‘old’ friend/collaborator Jim to the team at Sylabs that I’ve just joined, and to everyone in between, it’s hard not to feel a sense of gratitude at this juncture. With HPC’s premier event less than a month away in Dallas, I look forward to reconnecting with ‘my peeps’ at SC18, and ensuring they are aware of the exciting prospects Singularity brings to their organizations.

PyTorch Then & Now: A Highly Viable Framework for Deep Learning

Why PyTorch Then?

In preparing for a GTC 2017 presentation, I was driven to emphasize CUDA-enabled GPUs as the platform upon which I’d run my Machine Learning applications. Although I’d already had some encouraging experience with Apache Spark’s MLlib in a classification problem, ‘porting’ in-memory computations from CPUs to GPUs was, and remains, ‘exploratory’ – with, perhaps, the notable exception of a cloud-based offering from Databricks. Instead, in ramping up for this Silicon Valley event, I approached this ‘opportunity’ with an open mind and began my GPU-centric effort at an NVIDIA page for developers. As I wrote post-event in August 2017:

Frankly, the outcome surprised me: As a consequence of my GTC-motivated reframing, I ‘discovered’ Natural Language Processing (NLP) – broadly speaking, the use of human languages by a computer. Moreover, by reviewing the breadth and depth of possibilities for actually doing some NLP on my Twitter data, I subsequently ‘discovered’ PyTorch – a Python-based framework for Deep Learning that can readily exploit GPUs. It’s important to note that PyTorch is not the only choice available for engaging in NLP on GPUs, and it certainly isn’t the most-obvious choice. As I allude to in my GTC presentation, however, I was rapidly drawn to PyTorch.

Despite TensorFlow being (I expect) the most-obvious choice, I selected PyTorch for reasons that included the following:

Not bad for version 0.1 of a framework, I’d say! In fact, by the time I was responding to referees’ feedback in revising a book chapter (please see “Refactoring Earthquake-Tsunami Causality with Big Data Analytics” under NLP in my Data Science Portfolio), PyTorch had been revised to version 0.2.0. This was a very welcome revision in the context of that chapter revision, as it included a built-in method for computing cosine similarities (“cosine_similarity”) – the key discriminator for quantitatively assessing the semantic similarity between two word vectors.
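As an aside, here is a minimal sketch of that built-in method in use. The 300-dimensional random tensors below are illustrative stand-ins for real word vectors, not data from the chapter:

```python
import torch
import torch.nn.functional as F

# Two "word vectors" (random stand-ins; real embeddings would come from a
# trained model). Dimension 300 is an illustrative choice.
v1 = torch.randn(1, 300)
v2 = torch.randn(1, 300)

# cosine_similarity returns values in [-1, 1]; values near 1 indicate
# semantically similar directions in the embedding space.
similarity = F.cosine_similarity(v1, v2, dim=1)
print(f"cosine similarity: {similarity.item():.4f}")
```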

Perhaps my enthusiasm for PyTorch isn’t all that surprising, as I do fit into one of their identified user profiles:

PyTorch has gotten its biggest adoption from researchers, and it’s gotten about a moderate response from data scientists. As we expected, we did not get any adoption from product builders because PyTorch models are not easy to ship into mobile, for example. We also have people who we did not expect to come on board, like folks from OpenAI and several universities.

Towards PyTorch 1.0

In this same August 2017 O’Reilly podcast (from which I extracted the above quote on user profiles), Facebook’s Soumith Chintala stated:

Internally at Facebook, we have a unified strategy. We say PyTorch is used for all of research and Caffe 2 is used for all of production. This makes it easier for us to separate out which team does what and which tools do what. What we are seeing is, users first create a PyTorch model. When they are ready to deploy their model into production, they just convert it into a Caffe 2 model, then ship into either mobile or another platform.

Perhaps it’s not entirely surprising then that the 1.0 release intends to “… marry PyTorch and Caffe2 which gives the production-level readiness for PyTorch.” My understanding is that researchers (and others) retain the highly favorable benefit of developing in PyTorch but then, via the new JIT compiler, acquire the ability to deploy into production via Caffe2 or “… [export] to C++-only runtimes for use in larger projects”; thus PyTorch 1.0’s production reach extends beyond Python-based runtimes – e.g., to those runtimes that drive iOS, Android and other mobile devices. With TensorFlow having already emerged as the ‘gorilla of all frameworks’, this choice for productionizing PyTorch will be well received by Facebook and other proponents of Caffe2.
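To make that trace-then-export path concrete, here is a minimal sketch against the PyTorch 1.0 release-candidate APIs. The tiny model and filename are illustrative assumptions:

```python
import torch


class TinyModel(torch.nn.Module):
    """An illustrative stand-in for a research model developed in PyTorch."""

    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))


model = TinyModel().eval()
example_input = torch.randn(1, 4)

# torch.jit.trace records the operations executed on the example input,
# producing a static graph that no longer requires the Python interpreter.
traced = torch.jit.trace(model, example_input)

# The serialized artifact can then be loaded from a C++-only runtime
# (e.g., via torch::jit::load) for production deployment.
traced.save("tiny_model.pt")
```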

The productionization of PyTorch also includes:

  • A C++ frontend – “… a pure C++ interface to the PyTorch backend that follows the API and architecture of the established Python frontend …” that “… is intended to enable research in high performance, low latency and bare metal C++ applications.”
  • Distributed PyTorch enhancements – Originally introduced in version 0.2.0 of PyTorch, “… the torch.distributed package … allows you to exchange Tensors among multiple machines.” Otherwise a core competence of distributed TensorFlow, this ability to introduce parallelism via distributed processing becomes increasingly important as Deep Learning applications and their workflows transition from prototypes into production – e.g., as the demands of training escalate. In PyTorch 1.0, use of a new library (“C10D”) is expected to significantly enhance performance while enabling asynchronous communication – even when use is made of the familiar-to-HPC-types Message Passing Interface (MPI). A minimal sketch of the package in action appears after this list.
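Here is that sketch: it sums a tensor across two processes on a single machine. The “gloo” backend, rendezvous address/port and world size are illustrative assumptions; an MPI backend would additionally require PyTorch to be built against an MPI library.

```python
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def worker(rank, world_size):
    # Rendezvous settings for a single-machine demo (assumed values).
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group(backend="gloo", rank=rank, world_size=world_size)

    # Each rank contributes its own value; all_reduce sums across all ranks.
    tensor = torch.ones(1) * rank
    dist.all_reduce(tensor, op=dist.ReduceOp.SUM)
    print(f"rank {rank}: sum across ranks = {tensor.item()}")

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = 2
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```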

In May 2018, over on Facebook’s developer-centric blog, Bill Jia posted:

Over the coming months, we’re going to refactor and unify the codebases of both the Caffe2 and PyTorch 0.4 frameworks to deduplicate components and share abstractions. The result will be a unified framework that supports efficient graph-mode execution with profiling, mobile deployment, extensive vendor integrations, and more.

As of this writing, a release candidate for PyTorch 1.0 is available via GitHub.

Stable releases of previous versions are available for ‘local’ or cloud use.

Key Takeaway: Why PyTorch Now!

Whereas it might’ve been a no-brainer to adopt TensorFlow as your go-to framework for all of your Deep Learning needs, I found early releases of PyTorch to be an effective enabler over a year ago – when it was only at the 0.2.0 release stage! Fortunately, the team behind PyTorch has continued to advance the framework’s capabilities – capabilities that will soon officially include production-ready distributed processing. If you’re unaware of PyTorch, or bypassed it in the past, it’s likely worth another look right now.

Developing Your Expertise in Machine Learning: Podcasts for Breadth vs. Depth

From ad hoc to highly professional, there’s no shortage of resources when it comes to learning Machine Learning. Podcasts should be regarded as both viable and valuable resources, and the two I cover in this post present opportunities for improving your breadth and/or depth in Machine Learning.

Machine Learning Guide

As a component of his own process for ramping up his knowledge and skills in the area of Machine Learning, OCDevel’s Tyler Renelle has developed an impressive resource of some 30 podcast episodes. Through this collection, Tyler’s Guide is primarily a breadth play when it comes to learning Machine Learning, though he alludes to depth as well in how he positions it:

Where your other resources provide the machine learning trees, I provide the forest. Consider me your syllabus. At the end of every episode I provide high-quality curated resources for learning each episode’s details.

As I expect you’ll agree, with Tyler’s Guide, the purely audio medium of podcasting permits the breadth of Machine Learning to be communicated extremely effectively; in his own words, Tyler states:

Audio may seem inferior, but it’s a great supplement during exercise/commute/chores.

I couldn’t agree more. Even in the earliest episodes of the series, Tyler demonstrates the viability and value of this medium. In my opinion, he is particularly effective for at least three reasons:

  1. Repetition – Extremely important in any learning process, regardless of the medium, repetition is critical when podcasting is employed as a tool for learning.
  2. Analogies – Again, useful in learning regardless of the medium involved, yet extremely so in the case of podcasting. Imagine effective, simple, highly visual and sometimes fun analogies being introduced to explain, for example, a particular algorithm for Machine Learning.
  3. Enthusiasm – Perhaps a no-brainer, but enthusiasm serves to captivate interest and motivate action.

As someone who’s listened to each and every one of those 30 or so episodes, I can state with some assuredness that we are truly fortunate Tyler has expended the extra effort to share what he has learned in the hope that it’ll also help others. The quality of the Guide is excellent. If anything, I recall occasionally taking exception to some of the mathematical details related by Tyler. Because Tyler approaches the Guide from the perspective of an experienced developer, these mathematical lapses are extremely minor, and certainly do not detract from the overall value of the podcast.

After sharing his Guide, Tyler started up Machine Learning Applied:

an exclusive podcast series on practical/applied tech side of the same. Smaller, more frequent episodes.

Unfortunately, with only six episodes since May 2018, and none since mid-July, this more-applied series hasn’t yet achieved the stature of its predecessor. I share this more as a statement of fact than as criticism: sustaining the momentum to deliver such involved content on a regular cadence is not achieved without considerable effort – and, let’s be realistic, more than just a promise of monetization.

This Week in Machine Learning and AI

Whereas OCDevel’s Guide manifests itself as a one-person, breadth play, This Week in Machine Learning and AI (TWiML&AI) exploits the interview format in probing for depth. Built upon the seemingly tireless efforts of knowledgeable and skilled interviewer Sam Charrington, TWiML&AI podcasts allow those at the forefront of Machine Learning to share the details of their work – whether that translates to their R&D projects, business ventures or some combination thereof.

Like Tyler Renelle, Sam has a welcoming and nurturing style that allows him to ensure his guests are audience-centric in their responses – even if that means an episode is tagged with a ‘geek alert’ for conversations that include mathematical details, for example. As someone who engages in original research in Machine Learning, I have learned a lot from TWiML&AI. Specifically, after listening to a number of episodes, I’ve followed up on show notes by delving a little deeper into something that sounded interesting; on more than a few occasions, I’ve unearthed something of value for the projects I’m working on. Though Sam has interviewed some of the best known in this rapidly evolving field, it is truly wonderful that TWiML&AI serves as an equal-opportunity platform – a platform that allows voices that might otherwise be marginalized to be heard.

At this point, Sam and his team at TWIML&AI have developed a community around the podcast. The opportunity for deeper interaction exists through meetups, for example – meetups that have ranged from focused discussion on a particularly impactful research paper, to a facilitated study group in support of a course. In addition to all of this online activity, Sam and his team participate actively in a plethora of events, and have even been known to host events in person as well.

One last thought regarding TWiML&AI: the team takes significant effort to ensure that each of the 185 episodes (and counting!) is well documented. While this is extremely useful, I urge you not to make your decision on what to listen to based upon teasers and notes alone. Stated differently, I can relate countless examples for which I perceived a very low level of interest prior to actually listening to an episode, only to be both surprised and delighted when I did. As I recall well from my running days: run that first kilometre or so (0.6214 of a mile 😉 ) before you decide how far you’ll run that day.

From the understandably predictable essentials of breadth, to the sometimes surprising and delightful details of depth, these two podcasts well illustrate the complementarity of breadth and depth. Based upon my experience, you’ll be well served by taking in both – whether you’re jumpstarting your learning or engaged in continuous learning. Have a listen.

Prob & Stats Gaps: Sprinting for Closure

Prob & Stats Gap

When it comes to the mathematical underpinnings for Deep Learning, I’m extremely passionate. In fact, my perspective can be summarized succinctly:

Deep Learning – Deep Math = Deep Gap.

In reflecting upon my own mathematical credentials for Deep Learning, when it came to probability and statistics, I previously stated:

Through a number of courses in Time Series Analysis (TSA), my background affords me an appreciation for prob & stats. In other words, I have enough context to appreciate this need, and through use of quality, targeted resources (e.g., Goodfellow et al.’s textbook), I can close out the gaps sufficiently – in my case, for example, Bayes’ Rule and information theory.

Teaching to Learn

Although I can certainly leverage quality, targeted resources, I wanted to share here a complementary approach. One reason for doing this is that resources such as Goodfellow et al.’s textbook may not be readily accessible to everyone – in other words, some homework is required before some of us are ready to crack open this excellent resource, and make sense of the prob & stats summary provided there.

So, in the spirit of progressing towards being able to leverage appropriate references such as Goodfellow et al.’s textbook, please allow me to share here a much-more pragmatic suggestion:

Tutor a few high school students in prob & stats to learn prob & stats.

Just in case the basic premise of this suggestion isn’t evident, it is this: by committing to teach prob & stats, you must come to understand prob & stats. And as an added bonus, the commitment to tutor a few students (say) once a week establishes and reinforces a habit – a habit that is quite likely, in this case, to ensure you stick with your objective to broaden and deepen your knowledge/skills in probability and statistics.

This is also a service for which you could charge a fee – anywhere from the full rate for tutoring high-school math down to gratis, depending upon the value you’re able to offer your students … a rate you could, of course, adjust over time as your expertise with prob & stats develops.

Agile Sprints

Over recent years, I’ve found it particularly useful to frame initiatives such as this one in the form of Agile Sprints – an approach I’ve adopted and adapted from the pioneering efforts of J D Meier. To try this for yourself, I suggest the following two-step procedure:

  1. Review JD’s blog post on sprints – there’s also an earlier post of his that is both useful and relevant.
  2. Apply the annotated template I’ve prepared here to a sprint of your choosing. Because the sample template I’ve shared is specific to the prob & stats example I’ve been focused on in this post, I’ve also included a blank version of the sprint template here.

4DXit

Before you go, there’s one final point I’d like to draw your attention to – and that’s lead and lag measures. Whereas lag measures focus on your (wildly) important goal (WIG), lead measures emphasize those behaviors that’ll get you there. To draw from the example I shared for addressing a math gap in prob & stats, the lag measure is:

MUST have enhanced my knowledge/skills in the area of prob & stats such that I am better prepared to review Deep Learning staples such as Goodfellow et al.’s textbook

In contrast, examples of lead measures are each of the following:

SHOULD have sought tutoring positions with local and/or online services

COULD have acquired the textbook relevant for high-school level prob & stats

With appropriately crafted lead measures, the likelihood that your WIG will be achieved is significantly enhanced. Kudos to Cal Newport for emphasizing the importance of acting on lead measures in his book Deep Work. For all four disciplines of execution, you can have a closer look at Newport’s book, go to the 4DX source (the book of the same name), or simply Google for resources on “the 4 disciplines of execution”.

Of course, the approach described here can be applied to much more than a gap in your knowledge/skills of prob & stats. And as I continue the process of self-curating my Data Science Portfolio, I expect to unearth additional challenges and opportunities – challenges and opportunities that can be well approached through 4DX’d Agile Sprints.

Ian Lumb’s Data Science Portfolio

I had the very fortunate opportunity to present some of my research at GTC 2017 in Silicon Valley. Even after 3 months, I found GTC to be of lasting impact. However, my immediate response to the event was to reflect upon my mathematical credentials – credentials that would allow me to pursue Deep Learning with the increased breadth and depth demanded by my research project. I crystallized this quantitative reflection into a very simple question: Do I need to go back to school? (That is, back to school to enhance my mathematical credentials.)

There were a number of outcomes from this reflection upon my math creds for Deep Learning. Although the primary outcome was a mathematical ‘gap analysis’, a related outcome is this Data Science Portfolio that I’ve just started to develop. You see, after I reflected upon my mathematical credentials, it was difficult not to broaden and deepen that reflection; so, in a sense, this Data Science Portfolio is an outcome of that more-focused reflection.

As with the purely mathematical reflection, the effort I’m putting into self-curating my Data Science Portfolio allows me to showcase existing contributions (the easy part), but simultaneously raises interesting challenges and opportunities for future efforts (the difficult part). More on the future as it develops …

For now, the portfolio is organized into two broad categories:

  • Data Science Practitioner – intended to showcase my own contributions towards the practice of Data Science
  • Data Science Enabler – intended to showcase those efforts that have enabled other Data Scientists

At the end, and for now, there is a section on my academic background – a background that has shaped so many of the intersections between science and technology captured in the preceding sections of the portfolio.

Although I expect there’ll be more to share as this portfolio develops, I did want to share one observation immediately: When placed in the context of a portfolio, immune to the chronological tyranny of time, it is fascinating to me to see themes that form an arc through seemingly unrelated efforts. One fine example is the matter of semantics. In representing knowledge, for example, semantics were critical to the models I built using self-expressive data (i.e., data successively encapsulated via XML, RDF and ultimately OWL). And then again, in processing data extracted from Twitter via Natural Language Processing (NLP), I’m continually faced with the challenge of ‘retaining’ a modicum of semantics in approaches based upon Machine Learning. I did not plan this thematic arc of semantics; it is therefore fascinating to see such themes exposed – exposed particularly well by the undertaking of portfolio curation.

There’s no shortage of Data Science portfolios to view. One thing that’s certain, however, is that these portfolios are likely to be every bit as diverse and varied as Data Science itself, compounded by the uniqueness of the individuals involved. And that, of course, is a wonderful thing.

Thank you for taking the time to be a traveller at the outset of this journey with me. If you have any feedback whatsoever, please don’t hesitate to reach out via a comment and/or email to ian [DOT] lumb [AT] gmail [DOT] com. Bon voyage!

Genetic Aesthetics: Generative Software Meets Genetic Algorithms

I’m still reading Cloninger’s book, and just read a section on Generative Software (GS) – software used by contemporary designers to “… automate an increasingly large portion of the creative process.” As implied by the name, GS can produce a tremendous amount of output. It’s then up to the designer to be creatively stimulated as they sift through the GS output.

As I was reading Cloninger’s description, I couldn’t help but make my own connections with Genetic Algorithms (GAs). I’ve seen GAs applied in the physical sciences. For example, GAs can be used to generate models to fit data. The scientist provides an ancestor (a starting model), and then variations are derived through genetic processes such as mutation. Only the models with appropriate levels of fitness survive subsequent generations. Ultimately, what results is the best (i.e., most fit) model that explains the data according to the GA process.
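To make that evolve-and-select loop concrete, here is a minimal sketch of a GA fitting a model to data. The linear model, fitness function, mutation scale and population sizes are illustrative assumptions, not drawn from any particular study:

```python
import random

random.seed(42)

# Synthetic observations from a "true" model y = 2x + 1, plus noise.
data = [(x, 2.0 * x + 1.0 + random.gauss(0, 0.1)) for x in range(10)]


def fitness(model):
    """Negative sum of squared errors: higher values are fitter."""
    a, b = model
    return -sum((y - (a * x + b)) ** 2 for x, y in data)


def mutate(model, scale=0.1):
    """Derive a variation of a model via random (Gaussian) mutation."""
    return tuple(p + random.gauss(0, scale) for p in model)


# The scientist supplies an ancestor; mutation seeds the initial population.
ancestor = (0.0, 0.0)
population = [mutate(ancestor, scale=1.0) for _ in range(50)]

for generation in range(100):
    # Only the fittest models survive into the next generation.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = max(population, key=fitness)
print(f"best model: y = {best[0]:.2f}x + {best[1]:.2f}")
```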

In an analogous way, this is also what happens with the output from GS. Of course, in the GS case, it is the designer her/himself who determines what survives according to their own criteria.

The GS-GA connection is even stronger than my own association may cause you to believe.

In interviewing Joshua Davis for his book, Cloninger states:

At one point, you talked about creating software that would parse through the output of your generative software and select the iterations you were most likely to choose.

Davis responds:

That’s something [programmer] Branden Hall and I worked on called Genetic Aesthetic. It uses a neural network and genetic algorithms to create a “hot or not” situation. It says, “Rate this composition I generated on a scale from 1 to 10.” If I give it a 1, it says, “This isn’t beautiful. I should look at what kind of numbers were generated in this iteration and record those as unfavorable.” You have to train the software. Because the process is based on variables and numbers, over a very short period of time it’s able to learn what numbers are unsatisfactory and what numbers are satisfactory to that individual human critic. It changes per individual.

That certainly makes the GS-GA connection explicit – and poetic. Genetic Aesthetic – I like that!

I’ve never worked with GAs. However, I did lead a project at KelResearch where our objective was to classify hydrometeors (i.e., raindrops, snowflakes, etc.). The hydrometeors were observed in situ by a sensor deployed on the wing of an airplane, and data was collected as the plane flew through winter storms. (Many of these campaigns were spearheaded by Prof. R. E. Stewart.) We attempted to automate the classification of hydrometeors on the basis of their shape; more specifically, we attempted to estimate the fractal dimension of each observed hydrometeor in the hope of providing an automated classification scheme. Although this was feasible in principle, the resolution offered by the sensor made it impractical. Nonetheless, it was an interesting opportunity for me to personally explore the natural Genetic Aesthetics afforded by Canadian winter storms!
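For the curious, here is a minimal sketch of the box-counting approach to estimating fractal dimension from a shape’s binary image. The toy image and box sizes are illustrative assumptions, not the original sensor data:

```python
import numpy as np


def box_count(image, box_size):
    """Count boxes of side box_size that contain any part of the shape."""
    h, w = image.shape
    count = 0
    for i in range(0, h, box_size):
        for j in range(0, w, box_size):
            if image[i:i + box_size, j:j + box_size].any():
                count += 1
    return count


def fractal_dimension(image, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate dimension as the slope of log(count) vs. log(1/box_size)."""
    counts = [box_count(image, s) for s in box_sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope


# Toy shape: a diagonal line on a 64x64 grid (a line's dimension is ~1).
img = np.eye(64, dtype=bool)
print(f"estimated fractal dimension: {fractal_dimension(img):.2f}")
```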