Data Scientist: Believe. Behave. Become.

A Litmus Test

When do you legitimately get to call yourself a Data Scientist?

How about a litmus test? You’re at a gathering of some type, and someone asks you:

So, what do you do?

At which point can you (or I, or anyone) respond with confidence:

I’m a Data Scientist.

I think the responding-with-confidence part is key here for any of us with a modicum of humility, education, experience, etc. I don’t know about you, but I’m certainly not interested in this declaration being greeted by judgmental guffaws, coughing spasms, involuntary eye motion, etc. Instead of all this overt ‘body language’, I’m sure we’d all prefer to receive an inquiring response along the lines of:

Oh, just what the [expletive deleted] is that?

Or, at least:

Dude, seriously, did you like, just make that up?

Responses to this very legitimate, potentially disarming question will need to be saved for another time – though I’m sure a quick Google search will reveal a just-what-the-[expletive deleted]-is-a-Data-Scientist elevator pitch.

To return to the question intended for this post, however, let’s focus for a moment on how a best-selling author ‘became’ a writer.

“I’m a Writer”

I was recently listening to best-selling author Jeff Goins being interviewed by podcast host Srini Rao on an episode of the Unmistakable Creative. Although the entire episode (and the podcast in general, frankly) is well worth the listen, my purpose here is to extract the discussion relating to Goins’ own process of becoming a writer. In this episode of the podcast, Goins recalls the moment when he believed he was a writer. He then set about behaving as a writer – essentially, the hard work of showing up every single day just to write. Goins continues by explaining how, based upon his belief (“I am a writer”) and his behavior (i.e., the practice of writing on a daily basis), he ultimately realized his belief through his actions and became a writer. With five best-selling books to his credit, plus a high-traffic blog, and I’m sure much more, it’s difficult now to dispute Goins’ claim of being a writer.

Believe. Behave. Become. Sounds like a simple enough algorithm, so in the final section of this post, I’ll apply it to the question posed at the outset – namely:

When do you legitimately get to call yourself a Data Scientist?

I’m a Data Scientist?

I suppose, then, that by direct application of Goins’ algorithm, you can start the process merely by believing you’re a Data Scientist. Of course, I think we all know that that’ll only get you so far – probably not even to a first interview. More likely, most would agree that we need to have some Data Science chops before we would even entertain such an affirmation – especially in public.

And this is where my Data Science Portfolio enters the picture – in part, allowing me to self-validate, to legitimize whether or not I can call myself a Data Scientist in public without the laughing, choking or winking. What’s interesting, though, is that working through Goins’ algorithm via the active curation of a Data Science Portfolio has me working backwards – making use of hindsight to validate that I have ‘arrived’ as a Data Scientist:

  • Become – Whereas I don’t have best sellers or even a high-traffic blog to draw upon, I have been able to assemble a variety of relevant artifacts into a Portfolio. Included in the Portfolio are peer-reviewed articles that have appeared in journals with respectable impact factors. This, for a Data Scientist, is arguably the most stringent validation of an original contribution to the field. However, chapters in books, presentations at academic and industry events, and so on, also serve as valuable demonstrations of having become a Data Scientist. Though it doesn’t apply to me (yet?), the contribution of code would also serve as a resounding example – with frameworks such as Apache Hadoop, Apache Spark, PyTorch, and TensorFlow serving as canonical and compelling examples.
  • Behave – Not since the time I was a graduate student have I been able to show up every day. However, recognizing the importance of deliberate practice, there have been extended periods during which I have shown up every day (even if only for 15 minutes) to advance some Data Science project. In my own case, this was most often the consequence of holding down a full-time job at the same time – though in some cases, as is evident in the Portfolio, I have been able to work on such projects as a part of my job. Such win-win propositions can be especially advantageous for the aspiring Data Scientist and the organization s/he represents.
  • Believe – Perhaps the most important outcome of engaging in the deliberate act of putting together my Data Science Portfolio is that I’m already in a much more informed position – able to make a serious ‘gut check’ on whether or not I can legitimately declare myself a Data Scientist right here and right now.

The seemingly self-indulgent pursuit of developing my own Data Science Portfolio – an exercise in active self-curation – has (quite honestly) both surprised and delighted me; I clearly have been directly involved in the production of a number of artifacts that legitimately represent me as ‘active’ in the area of Data Science. The part-time nature of this pursuit, especially since the completion of grad school (though with a few notable exceptions), has also produced a number of outcomes that can be diplomatically described as works (still) in progress … and in some cases, that is unfortunate.

Net-net, there is some evidence to support a self-declaration as a Data Scientist – based upon the artifacts produced and the implied (though inconsistent) behaviors. However, when asked the question “What do you do?”, I am more likely to respond:

I am a demonstrably engaged and passionate student of Data Science – an aspiring Data Scientist, per se … one who’s actively working on becoming, behaving and ultimately believing he’s a Data Scientist.

Based on my biases, and owing to the very nature of Data Science itself, that’s how I currently feel.

Remembering a Supportive Sibling

Less than a week before I was scheduled to deliver my first presentation on a novel way of approaching an outstanding challenge in seismic processing, my younger sister Deborah passed away. She was only 50. Thanks to medical care that included extensive chemotherapy, Debbie recovered from lymphoma once, and was declared cancer-free. However, a second wave of lymphoma, accompanied by leukemia, proved to be more than she could handle – and we lost her during a procedure that (ironically) was attempting to provide more information about the cancers that had literally taken over her body.

Between Debbie’s passing and her funeral lay not only about a week’s lapse of time, but also the need for me to make a decision – a decision to present as scheduled at the 2015 Rice University Oil & Gas Conference in Houston, or to miss the event entirely. A complicating factor was that I truly was the only person who could deliver the presentation. That’s more a pragmatic statement than a boastful one, as I had combined my background in geophysics with an increasing knowledge of Big Data Analytics; in so doing, I’d arrived at a submission for the RiceU Conference that was as uniquely of my creation as it was a disruptive suggestion – in other words, something I felt strongly to be well suited to the Conference’s Disruptive Technology Track. With the Conference less than a week away, most of the real work had already been completed; all I needed to do was show up, make a two-minute presentation, and discuss the poster I’d prepared with those who expressed an interest.

Debbie was always highly supportive and encouraging when it came to ‘things’ like this – the expression of something worth sharing. This, despite the fact that she and I were on completely different trajectories when it came to our intellectual interests and pursuits – me in the physical sciences and technology, while Debbie favoured English literature. Despite these differences, Debbie often made a point of trying to understand and appreciate what I was working on – no matter how geekily obscure.

In recalling these traits of hers – her sincere interest in what I was doing, I suppose, just because we were siblings – my decision to follow through with the presentation at the RiceU Conference was a relatively easy one. Executing it, however, was at times challenging … and I could not have followed through without the support of my colleagues from Bright Computing.

You can still review my two-minute presentation here, thanks to the wonderful people who run this industry-leading event on an annual basis at Rice. The poster I alluded to is available here. The ideas hatched through these 2015 communications proved instrumental in spinning off additional contributions. Equally important were the interactions initiated at this 2015 RiceU Conference. Some of these interactions resulted in relationships that persist through today – relationships that have, for example, resulted in me applying Machine Learning to problems of scientific interest.

And so it is, on the occasion of what would’ve been Debbie’s 54th birthday, that I wistfully remember my sister. Without knowing that I’d have had her support and encouragement, I likely wouldn’t have followed through with that March 2015 presentation at the RiceU Conference – a decision that had immediate as well as long-lasting implications for my progression as a Data Scientist.

How I Ended Up in Geophysical Fluid Dynamics

Lately, I’ve been disclosing the various biases I bring to practicing and enabling Data Science. Motivated by my decision to (finally) self-curate an online, multimedia portfolio, I felt such biases to be material in providing the context that frames this effort. Elsewhere, I’ve shared my inherently scientific bias. In this post, I want to provide additional details. These details I’ve been able to extract verbatim from a blog post I wrote for Bright Computing in January 2015; once I’d settled on geophysics (see below), I aspired to be a seismologist … but, as you’ll soon find out, things didn’t pan out quite the way I’d expected:

I always wanted to be a seismologist.

Scratch that: I always wanted to be an astronaut. How could I help it? I grew up in suburban London (UK, not Ontario) watching James Burke cover the Apollo missions. (Guess I’m also revealing my age here!)

Although I never gave my childhood dream of becoming an astronaut more than a fleeting consideration, I did pursue a career in science.

As my high-school education drew to a close, I had my choices narrowed down to being an astronomer, a geophysicist, or a nuclear physicist. In grade 12 at Laurier Collegiate in Scarboro (Ontario, not UK … or elsewhere), I took an optional physics course that introduced me to astronomy and nuclear physics. And although I was taken by both subjects, and influenced by wonderful teachers, I dismissed both of these as areas of focus in university. As I recall, I had concerns that I wouldn’t be employable with a degree in astronomy, and I wasn’t ready to confront the ethical/moral/etc. dilemmas I expected would accompany a choice of nuclear physics. Go figure!

And so it was to geophysics I was drawn, again influenced significantly by courses in physical geography taught by a wonderful teacher at this same high school. My desire to be a seismologist persisted throughout my undergraduate degree at Montreal’s McGill University, where I ultimately graduated with a B.Sc. in solid Earth geophysics. Armed with my McGill degree, I was in a position to make seismology a point of focus.

But I didn’t. Instead, at Toronto’s York University, I applied Geophysical Fluid Dynamics (GFD) to Earth’s deep interior – mostly Earth’s fluid outer core. Nothing superficial here (literally), as the core only begins some 3,000 km below where we stand on the surface!

Full disclosure: In graduate school, the emphasis was GFD. However, seismology crept in from time to time. For example, I made use of results from deep-Earth seismology in estimating the viscosity of Earth’s fluid outer core. Since this is such a deeply remote region of our planet, geophysicists need to content themselves with observations accessible via seismic and other methods.

From making use of Apache Spark to improve the performance of seismic processing (search for “Reverse-Time Seismic Migration” or “RTM” in my Portfolio), to the analysis of ‘seismic data’ extracted from Twitter (search for “Twitter” in my Portfolio), seismology has taken center stage in a number of my projects as a practitioner of Data Science. However, so has the geophysical fluid dynamics of Earth’s mantle and outer core. Clearly, you can have your geeky cake and eat it too!
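
For readers curious what a Spark-based approach to seismic work can look like, here is a minimal, hypothetical sketch – emphatically not the RTM workflow from my Portfolio – of the embarrassingly parallel pattern involved: independent per-trace computations fanned out across a cluster. The trace data and the normalize() helper are invented for illustration.

```python
from pyspark.sql import SparkSession

# Hypothetical illustration only: distribute a simple per-trace
# computation (peak-amplitude normalization) across Spark workers.
# Real traces would be read from SEG-Y files, not hard-coded.
spark = SparkSession.builder.appName("seismic-sketch").getOrCreate()
sc = spark.sparkContext

traces = [[0.1, 0.5, -0.3], [0.9, -0.7, 0.2], [0.0, 0.4, -0.8]]

def normalize(trace):
    """Scale a trace by its peak absolute amplitude."""
    peak = max(abs(s) for s in trace) or 1.0
    return [s / peak for s in trace]

# The map() is where the parallelism happens: each trace is
# processed independently, wherever in the cluster it lives.
normalized = sc.parallelize(traces).map(normalize).collect()
print(normalized)

spark.stop()
```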

Data Science: Identifying My Professional Bias

In the Summer of 1984, I arrived at Toronto’s York University as a graduate student in Physics & Astronomy. (Although my grad programme was Physics & Astronomy, my research emphasized the application of fluid dynamics to Earth’s deep interior.) Some time after that, I ran my first non-interactive computation on a cluster of VAX computers. I’m not sure if this was my first exposure to Distributed Computing or not; I am, however, fairly certain that this was the first time it (Distributed Computing) registered with me as something exceedingly cool, and exceedingly powerful.

Even back in those days, armed with nothing more than a VT100 terminal ultimately connected to a serial interface on one of the VAXes, I could log in and submit a computational job that might run on some other VAX participating in the cluster. The implied connectedness – the innate ability to make use of compute cycles on some ‘remote’ system – was intellectually intoxicating … and I wasn’t even doing any parallel computing (yet)!

More than a decade later, while serving in a staff role as a computer coordinator, I became involved in procuring a modest supercomputer for those members of York’s Faculty of Pure & Applied Science who made High Performance Computing (HPC) a critical component of their research. If memory serves me correctly, this exercise resulted in the purchase of a NUMA-architecture system from SGI powered by MIPS CPUs. Though the system was initially isolated, Platform LSF was included as a component of the overall solution, to manage the computational workloads that would soon consume its resources.

The more I learned about Platform LSF, the more I was smitten by the promise and reality of Distributed Computing – a capability to be leveraged from a resource-centric perspective with this Load Sharing Facility (LSF). [Expletive deleted], Platform founder Songnian Zhou expressed the ramifications of his technical vision for this software as Utopia in a 1993 publication. Although buying the company wasn’t an option, I did manage to be hired by Platform, and work there in various roles for about seven-and-a-half years.

Between my time at Platform (now an IBM company) and, much more recently, Univa, over a decade of my professional experience has been spent focused on managing workloads in Distributed Computing environments. From a small handful of VAXes to core counts that have reached seven figures, these environments have included clusters, grids, and clouds.

My professional bias towards Distributed Computing was further enhanced through the experience of being employed by two software vendors that emphasized the management of clusters – namely Scali (Scali Manage) and subsequently Bright Computing (Bright Cluster Manager). Along with Univa (Project Tortuga and Navops Launch), Bright extended its reach to the management of HPC resources in various cloud configurations.

If it weren’t for a technical role at Allinea (subsequently acquired by ARM), I might have ended up ‘stuck in the middle’ of the computational stack – as workload and cluster management is regarded by the HPC community (at least) as middleware … software that exists between the operating environment (i.e., the compute node and its operating system) and the toolchain (e.g., binaries, libraries) that ultimately supports applications and end users (e.g., Figure 5 here).

Allinea focused on tools to enable HPC developers. Although they were in the process of broadening their product portfolio to include a profiling capability around the time of my departure, during my tenure there the emphasis was on a debugger – a debugger capable of handling code targeted for (you guessed it) Distributed Computing environments.

Things always seemed so much bigger when we were children. Whereas Kid Ian was impressed by a three-node VAX cluster, and later ‘blown away’ by a modest NUMA-architecture ‘supercomputer’, Adult Ian had the express privilege of running Allinea DDT on some of the largest supercomputers on the planet (at the time) – tracking down a bug that only showed up when more than 20K cores were used in parallel on one of Argonne’s Blue Genes, and demonstrating scalable, parallel debugging during a tutorial on some 700K cores of NCSA’s Blue Waters supercomputer. In hindsight, I can’t help but feel humbled by the impressive capability of Allinea DDT to scale to these extremes. Because HPC’s appetite for scale has extended beyond tera- and petascale capabilities, and is seriously eyeing the demand to perform at the exascale, software like Allinea DDT also needs to match this penchant for extremely extreme scale.

At this point, suffice it to say that scalable Distributed Computing has been firmly encoded into my professional DNA. As with my scientifically based academic bias, it’s difficult not to frame my predisposition towards Distributed Computing in a positive light within the current context of Data Science. Briefly, it’s a common experience for the transition from prototype to production to include the introduction of Distributed Computing – if only to execute applications and/or their workflows on more powerful computers, or perhaps to scale them out in parallel.
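
To make that prototype-to-production step concrete, here is a minimal, hypothetical sketch in Python: the same stand-in computation is run serially first, then fanned out across local cores via the standard library’s concurrent.futures. The identical map-style pattern is what carries over to cluster schedulers and frameworks like Spark.

```python
from concurrent.futures import ProcessPoolExecutor

def simulate(case):
    """Stand-in for an expensive scientific computation."""
    return sum(i * i for i in range(case * 100_000))

cases = list(range(1, 9))

if __name__ == "__main__":
    # Prototype: serial execution on a single core.
    serial = [simulate(c) for c in cases]

    # Production-leaning step: the same workload fanned out
    # across cores; nothing about simulate() had to change.
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(simulate, cases))

    # Scaling out should never change the answers.
    assert serial == parallel
```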

I anticipate the need to return to this disclosure regarding the professional bias I bring to Data Science. For now though, calling out the highly influential role Distributed Computing has played in my personal trajectory appears warranted within the context of my Data Science Portfolio.

Data Science: Celebrating My Academic Bias

In a recent post, I introduced my Data Science Portfolio. After describing the high-level organization of the Portfolio, I noted:

At the end, and for now, there is a section on my academic background – a background that has shaped so much of those intersections between science and technology that have been captured in the preceding sections of the portfolio.

Even in this earliest of drafts, I knew that I was somewhat uncomfortable with a section dedicated to academics in my Portfolio. After all, shouldn’t a portfolio place more emphasis on how my knowledge and skills, academic or otherwise, have been applied to produce some tangible artifact?

Upon further reflection, I currently believe that what’s material in the context of a portfolio is some indication of the bias inherent in the resulting curated showcase of one’s work. Of course, to some degree, the works presented and the curation process itself will make such personal bias self-evident.

Whereas it may make sense for an artist not to overtly disclose any bias with respect to their craft, or a curated collection of their work, I currently perceive absolutely no downside in sharing my personal bias – a bias that, in my own case, I believe reflects only in positive ways on the Portfolio as well as the individual items included in it.

To this end, and in the spirit of such positive self-disclosure, my personal bias reflects my formative years in science – a background to which I well recall significant contributions from high school, contributions that were subsequently broadened and deepened as an undergraduate and then graduate student. Even more specific, in terms of personal bias, was my emphasis on the physical sciences; a bias that remains active today.

As I’ve started to share, through such posts as the one on the mathematical credentials I bring to Data Science, my choice to pursue the physical sciences was an excellent one – even through the self-critical lens of personal hindsight. An excellent choice, albeit a biased one.

The very nature of Data Science is such that each of us carries our own, wonderfully unique personal bias. As we necessarily collaborate in team, project, and organizational settings, I believe it’s important not only to ensure that each of us preserves that personal bias, but also that we leverage this perspective as fully and appropriately as possible. As a consequence, it is much more likely that everyone we work with, and everything we work on, will derive maximal value.

Ian Lumb’s Data Science Portfolio

I had the very fortunate opportunity to present some of my research at GTC 2017 in Silicon Valley. Even after three months, I find GTC to be of lasting impact. However, my immediate response to the event was to reflect upon my mathematical credentials – credentials that would allow me to pursue Deep Learning with the increased breadth and depth demanded by my research project. I crystallized this quantitative reflection into a very simple question: Do I need to go back to school? (That is, back to school to enhance my mathematical credentials.)

There were a number of outcomes from this reflection upon my math creds for Deep Learning. Although the primary outcome was a mathematical ‘gap analysis’, a related outcome is this Data Science Portfolio that I’ve just started to develop. You see, after I reflected upon my mathematical credentials, it was difficult not to broaden and deepen that reflection; so, in a sense, this Data Science Portfolio is an outcome of that broader reflection.

As with the purely mathematical reflection, the effort I’m putting into self-curating my Data Science Portfolio allows me to showcase existing contributions (the easy part), but simultaneously raises interesting challenges and opportunities for future efforts (the difficult part). More on the future as it develops …

For now, the portfolio is organized into two broad categories:

  • Data Science Practitioner – intended to showcase my own contributions towards the practice of Data Science
  • Data Science Enabler – intended to showcase those efforts that have enabled other Data Scientists

At the end, and for now, there is a section on my academic background – a background that has shaped so much of those intersections between science and technology that have been captured in the preceding sections of the portfolio.

Although I expect there’ll be more to share as this portfolio develops, I did want to share one observation immediately: when placed in the context of a portfolio, immune to the chronological tyranny of time, it is fascinating to see themes that form an arc through seemingly unrelated efforts. One fine example is the matter of semantics. In representing knowledge, for example, semantics were critical to the models I built using self-expressive data (i.e., data successively encapsulated via XML, RDF, and ultimately OWL). And then again, in processing data extracted from Twitter via Natural Language Processing (NLP), I’m continually faced with the challenge of ‘retaining’ a modicum of semantics in approaches based upon Machine Learning. I did not plan this thematic arc of semantics; it is therefore fascinating to see such themes exposed – exposed particularly well by the undertaking of portfolio curation.
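
For readers unfamiliar with what ‘self-expressive data’ can look like, here is a minimal, hypothetical sketch using Python’s rdflib library. The namespace, class, and triples are invented for illustration; the point is simply that RDF lets the data carry its own schema alongside the facts themselves.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

# Hypothetical namespace for illustration only.
EX = Namespace("http://example.org/quakes#")

g = Graph()
g.bind("ex", EX)

# The schema travels with the data: we declare a class...
g.add((EX.SeismicEvent, RDF.type, RDFS.Class))

# ...and then describe an instance of it, all in one graph.
g.add((EX.event42, RDF.type, EX.SeismicEvent))
g.add((EX.event42, EX.magnitude, Literal(4.6)))
g.add((EX.event42, RDFS.label, Literal("offshore event, 2015-03-14")))

# Serialize to Turtle: both the vocabulary and the facts appear.
print(g.serialize(format="turtle"))
```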

There’s no shortage of Data Science portfolios to view. One thing that’s certain, however, is that these portfolios are likely to be every bit as diverse and varied as Data Science itself, compounded by the uniqueness of the individuals involved. And that, of course, is a wonderful thing.

Thank you for taking the time to be a traveller at the outset of this journey with me. If you have any feedback whatsoever, please don’t hesitate to reach out via a comment and/or email to ian [DOT] lumb [AT] gmail [DOT] com. Bon voyage!

Recent Workshop: Nurturing Quantitative Skills for the Physical Sciences through use of Scientific Models

A few weeks back, I delivered a workshop at a conference focused on online learning. Unfortunately, abstracts were not made available via the event’s web site. In lieu of directing you elsewhere, then, below is the abstract I submitted:

Nurturing Quantitative Skills for the Physical Sciences through use of Scientific Models

L. I. Lumb
Division of Natural Science, Faculty of Science, York University
Toronto, Ontario, Canada

With numerous scientists asserting that we have entered into The Anthropocene, a ‘brand new’ Geologic Epoch that underscores human impact on planet Earth, there has arguably never been a more relevant time for literacy in the physical sciences. Complicating this, however, is the implied need for quantitative skills demanded of those who seek to have more than a superficial degree of literacy in matters relating to climate or global change. Grounded by direct, personal experience in teaching science to non-scientists at the undergraduate university level, and independently validated by academic research into Science, Technology, Engineering, and Math (STEM) related programs and subjects, mastery of even the most basic quantitative skills presents a well-established challenge in engaging learners at levels beyond the quantitatively superficial – a challenge that appears increasingly evident with each Fall’s arriving cohort of undergraduates. In an effort to systematically develop and encourage proficiency in quantitative skills in data-rich courses in the physical sciences, a number of scientific models have been introduced by the author. Ranging from embarrassingly simple physical models using rice to investigate relative humidity in Earth’s atmosphere, to software-based models that employ spreadsheets to elucidate aspects of climate and global change, the use of scientific models presents intriguing challenges and opportunities for both instructors and students; needless to say, these challenges and opportunities can be significantly exacerbated in courses that are delivered online to numbers in excess of 100 students. After an introduction of scientific models as a pedagogical vehicle for nurturing quantitative skills, emphasis shifts to the sharing of real-world experiences with this approach in relatively large, online courses in the physical sciences taught at the undergraduate level to non-majors (and therefore non-scientists). In ultimately working towards the primary example of a relatively simple, yet scientifically appropriate, spreadsheet model for the Paris Climate Agreement, participants’ involvement will be scaffolded through use of other examples of models that have also been used in practice. Participants will also be encouraged to engage in a dialogue that compares and contrasts these models with more traditional approaches (e.g., formal essays). Finally, armed with some context for models as a pedagogical vehicle for quantitatively enhancing student engagement, participants will be guided through exercises that will allow them to develop their own models for their own teaching and learning requirements – whether their interests fall within or beyond scientifically oriented disciplines.

As you can see, I have a vested interest in nurturing quantitative skills, and models are one of the vehicles I make use of. If you share similar interests – or, better yet, if you have ideas as to what’s worked for you – please feel free to comment.
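
To give a flavour of the kind of spreadsheet-style calculation such a model can perform, here is a minimal, hypothetical sketch in Python. It is not the workshop’s model; every number below is an illustrative assumption, with the warming-per-emissions factor (TCRE) taken as roughly 0.45 °C per 1000 GtCO2, in line with IPCC assessments.

```python
# Toy, spreadsheet-style carbon-budget arithmetic for the Paris
# Agreement temperature targets. All inputs are assumptions made
# for illustration, not the model used in the workshop.

TCRE = 0.45 / 1000.0       # assumed warming (deg C) per GtCO2 emitted
warming_to_date = 1.1      # assumed observed warming so far (deg C)
annual_emissions = 40.0    # assumed current emissions (GtCO2 per year)

for target in (1.5, 2.0):  # the two Paris Agreement targets (deg C)
    # Remaining emissions budget before the target is exceeded.
    budget = (target - warming_to_date) / TCRE
    # How long that budget lasts at constant current emissions.
    years = budget / annual_emissions
    print(f"{target} C target: ~{budget:.0f} GtCO2 remaining, "
          f"~{years:.0f} years at current emissions")
```

Exactly the same three-line calculation lives comfortably in a spreadsheet, which is what makes it approachable for non-science undergraduates while still inviting discussion of every assumption baked into it.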