Developing Your Expertise in Machine Learning: Podcasts for Breadth vs. Depth

From ad hoc to highly professional, there’s no shortage of resources when it comes to learning Machine Learning. Podcasts should be regarded as both viable and valuable resources, and the two I cover in this post present opportunities for improving your breadth and/or depth in Machine Learning.

Machine Learning Guide

As a component of his own process for ramping up his knowledge and skills in Machine Learning, OCDevel’s Tyler Renelle has developed an impressive resource of some 30 podcast episodes. Through this collection of episodes, Tyler’s Guide is primarily a breadth play when it comes to learning Machine Learning, though he alludes to depth as well in how he positions his podcast:

Where your other resources provide the machine learning trees, I provide the forest. Consider me your syllabus. At the end of every episode I provide high-quality curated resources for learning each episode’s details.

As I expect you’ll agree, with Tyler’s Guide, the purely audio medium of podcasting permits the breadth of Machine Learning to be communicated extremely effectively; in his own words, Tyler states:

Audio may seem inferior, but it’s a great supplement during exercise/commute/chores.

I couldn’t agree more. Even from the earliest of those episodes in this series, Tyler demonstrates the viability and value of this medium. In my opinion, he is particularly effective for at least three reasons:

  1. Repetition – Extremely important in any learning process, regardless of the medium, repetition is critical when podcasting is employed as a tool for learning.
  2. Analogies – Again, useful in learning regardless of the medium involved, yet extremely so in the case of podcasting. Imagine effective, simple, highly visual and sometimes fun analogies being introduced to explain, for example, a particular algorithm for Machine Learning.
  3. Enthusiasm – Perhaps a no-brainer, but enthusiasm serves to captivate interest and motivate action.

As someone who’s listened to each and every one of those 30 or so episodes, I can state with some assuredness that we are truly fortunate Tyler has expended the extra effort to share what he has learned in the hope that it’ll also help others. The quality of the Guide is excellent. If anything, I recall occasionally taking exception to some of the mathematical details Tyler relates. Because Tyler approaches the Guide from the perspective of an experienced developer, these mathematical lapses are extremely minor, and certainly do not detract from the overall value of the podcast.

After sharing his Guide, Tyler started up Machine Learning Applied:

an exclusive podcast series on practical/applied tech side of the same. Smaller, more frequent episodes.

Unfortunately, with only six episodes starting from May 2018, and none since mid-July, this more-applied series hasn’t yet achieved the stature of its predecessor. I share this more as a statement of fact than criticism, as sustaining the momentum to deliver such involved content on a regular cadence is not achieved without considerable effort – and, let’s be realistic, more than just a promise of monetization.

This Week in Machine Learning and AI

Whereas OCDevel’s Guide manifests itself as a one-person, breadth play, This Week in Machine Learning and AI (TWiML&AI) exploits the interview format in probing for depth. Built upon the seemingly tireless efforts of knowledgeable and skilled interviewer Sam Charrington, TWiML&AI podcasts allow those at the forefront of Machine Learning to share the details of their work – whether that translates to their R&D projects, business ventures or some combination thereof.

Like Tyler Renelle, Sam has a welcoming and nurturing style that allows him to ensure his guests are audience-centric in their responses – even if that means an episode is tagged with a ‘geek alert’ for those conversations that include mathematical details, for example. As someone who engages in original research in Machine Learning, I have learned a lot from TWiML&AI. Specifically, after listening to a number of episodes, I’ve followed up on show notes by delving a little deeper into something that sounded interesting; and on more than a few occasions, I’ve unearthed something of value for those projects I’m working on. Though Sam has interviewed some of the best-known people in this rapidly evolving field, it is truly wonderful that TWiML&AI serves as an equal-opportunity platform – a platform that allows voices that might otherwise be marginalized to also be heard.

At this point, Sam and his team at TWIML&AI have developed a community around the podcast. The opportunity for deeper interaction exists through meetups, for example – meetups that have ranged from focused discussion on a particularly impactful research paper, to a facilitated study group in support of a course. In addition to all of this online activity, Sam and his team participate actively in a plethora of events, and have even been known to host events in person as well.

One last thought regarding TWiML&AI: The team here takes significant effort to ensure that each of the 185 episodes (and counting!) is well documented. While this is extremely useful, I urge you not to make your decision on what to listen to based upon teasers and notes alone. Stated differently, I can relate countless examples for which I perceived a very low level of interest prior to actually listening to an episode, only to be both surprised and delighted when I did. As I recall well from my running days: run that first kilometre or so (0.6214 of a mile 😉 ) before you make the decision as to how far you’ll run that day.

From the understandably predictable essentials of breadth, to the sometimes surprising and delightful details of depth, these two podcasts well illustrate the complementarity between the schools of breadth and depth. Based upon my experience, you’ll be well served by taking in both of these podcasts – whether you need to jumpstart your learning or engage in continuous learning. Have a listen.

Livescribe Pencasting: Seizing Uncertainty from Success

Echo’es of a Glorified Past

[Optional musical accompaniment: From their album Meddle, Pink Floyd’s Echoes (via their YouTube channel).]

I first learned about pencasting from an elementary-school teacher at a regional-networking summit in 2011.

It took me more than a year to acquire the technology and start experimenting. My initial experiences, in making use of this technology in a large, first-year course at the university level, were extremely encouraging; and after only a week’s worth of experimentation, my ‘findings’ were summarized as follows:

  • Decent-quality pencasts can be produced with minimal effort
  • Pencasts complement other instructional media
  • Pencasts allow the teacher to address both content and skills-oriented objectives
  • Pencasts are learner-centric
  • I’m striving for authentic, not perfect pencasts

The details that support each of these findings are provided in my April 2012 blog post. With respect to test driving the pencasts in a large-lecture venue, I subsequently shared:

  • The visual aspects of the pencast are quite acceptable
  • The audio quality of the pencasts is very good to excellent
  • One-to-many live streaming of pencasts works well
  • Personal pencasting works well

Again, please refer to my original post in October 2012 for the details.

Over the next year or so, I must’ve developed on the order of 20 pencasts using the Livescribe Echo smartpen – please see below for a sample. Given that the shortest of these pencasts ran 15-20 minutes, my overall uptake of and investment in the technology was significant. I unexpectedly became ‘an advocate for the medium’, as I shared my pencasts with students in the courses I was teaching, with colleagues who were also instructing in universities and high schools, plus textbook publishers. At one point, I even had interest from both the publisher and an author of the textbook I was using in my weather and climate class to develop a few pencasts – pencasts that would subsequently be made available as instructional media to any other instructor making use of this same textbook.

[Sample pencast: Download a mathematical example – namely, Hydrostatic Equation – Usage Example – 2013-06-15T12-10-06-0. Then, use the Livescribe player, desktop, iOS or Android app to view/listen.]

The Slings and Arrows of Modernization

Unfortunately, all of this changed in the Summer of 2015. Anticipating the impending demise of Adobe Flash technology, in what was marketed as a ‘modernization’ effort, Livescribe rejected this one-time staple in favour of their own proprietary appropriation of the Adobe PDF. Along with the shift to the Livescribe-proprietary format for pencasts came an implicit requirement to make use of browser, desktop or mobile apps from this sole-source vendor. As if these changes weren’t enough, Livescribe then proceeded to close its online community – the vehicle through which many of us were sharing our pencasts with our students, colleagues, etc. My frustration was clearly evident in a comment posted to Livescribe’s blog in September 2014:

This may be the tipping point for me and Livescribe products – despite my investment in your products and in pencast development … I’ve been using virtual machines on my Linux systems to run Windows so that I can use your desktop app. The pay off for this inconvenience was being able to share pencasts via the platform-neutral Web. Your direction appears to introduce complexities that translate to diminishing returns from my increasingly marginalized Linux/Android perspective …

From the vantage point of hindsight in 2018, and owing to the ongoing realization of the demise of Flash, I fully appreciate that Livescribe had to do something about the format they employed to encode their pencasts; and, given that there aren’t any open standards available (are there?), they needed to develop their own, proprietary format. What remains unfortunate, however, is the implicit need to make use of their proprietary software to actually view and listen to the pencasts. As far as I can tell, their browser-based viewer still doesn’t work on popular Linux-based platforms (e.g., Ubuntu), while you’ll need a Microsoft Windows or Apple Mac OS X based platform to make use of their desktop application. Arguably, the most-positive outcome from ‘all of this’ is that their apps for iOS and Android devices are quite good. (Of course, it took some time for the Android app to follow the release of the iOS app.)

Formats aside, the company’s decision to close its community still, from the vantage point of 2018, strikes me as a strategic blunder of epic proportions. (Who turns their back on their community and expects to survive?) Perhaps they (Livescribe) didn’t want to be in the community-hosting business themselves. And while I can appreciate and respect that position, alternatives were available at the time, and abound today.

Pencasting Complexified

[Full disclosure: I neither own, nor have I used the Livescribe 3 smartpen alluded to in the following paragraph. In other words, this is my hands-off take on the smartpen. I will happily address factual errors.]

At one point, and in my opinion, the simplicity of the Livescribe Echo smartpen was its greatest attribute. As a content producer, all I needed was the pen and one of Livescribe’s proprietary notebooks, plus a quiet place in which to record my pencasts. Subsequent innovation from the company resulted in the Livescribe 3 smartpen. Though it may well be designed “… to work and write like a premium ballpoint pen …”, the complexity introduced now requires the content producer to have the pen, the notebook, a Bluetooth headset plus an iOS or Android device to capture pencasts. In this case, there is a serious price to be paid for modernization – both figuratively and literally.

According to Wikipedia, the Livescribe 3 smartpen was introduced in November 2013. And despite the acquisition by Anoto about two years later, innovation appears to have ceased. So much for first-mover advantage, and Jim Marggraff’s enviable track record of innovation.

My need to pencast remains strong – even in 2018. If you’ve read this far, I’m sure you’ll understand why I might be more than slightly reluctant to fork out the cash for a Livescribe 3 smartpen. There may be alternatives, however; and I do expect that future posts may share my findings, lessons learned, best practices, etc.

Feel free to weigh in on this post via the comments – especially if you have alternatives to suggest. Please note: Support for Linux highly desirable.

Demonstrating Your Machine Learning Expertise: Optimizing Breadth vs. Depth

Developing Expertise

When it comes to developing your expertise in Machine Learning, there seem to be two schools of thought:

  • Exemplified by articles that purport to list, for example, the 10 most important methods you need to know to ace a Machine Learning interview, the School of Breadth emphasizes content-oriented objectives. By ramping up from courses/workshops to programs (e.g., certificates, degrees), the justification for broadening your knowledge of Machine Learning becomes self-evident.
  • Exemplified by the advice to find data that interests you and work with it using a single approach for Machine Learning, the School of Depth emphasizes skills-oriented objectives that are progressively mastered as you delve into data or, better yet, a problem of interest.

Depending upon whichever factors you currently have under consideration then (e.g., career stage, employment status, desired employment trajectory, …), breadth versus depth may result in an existential crisis when it comes to developing and ultimately demonstrating your expertise in Machine Learning – with a modicum of apologies if that strikes you as a tad melodramatic.

Demonstrating Expertise

In all honesty, somewhat conflicted is how I feel at the moment myself.

On Breadth

Even a rapid perusal of the Machine Learning specific artifacts I’ve self-curated into my online, multimedia Data Science Portfolio makes one thing glaringly evident: The breadth of my exposure to Machine Learning has been somewhat limited. Specifically, I have direct experience with classification and Natural Language Processing in Machine Learning contexts from the practitioner’s perspective. The more-astute reviewer, however, might look beyond the ‘pure ML’ sections of my portfolio and afford me additional merit for (say) my mathematical and/or physical sciences background, plus my exposure to concepts directly or indirectly applicable to Machine Learning – e.g., my experience as a scientist with least-squares modeling counting as exposure at a conceptual level to regression (just to keep this focused on breadth, for the moment).
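As a brief aside on that conceptual overlap: the least-squares line fitting a physical scientist does routinely is, in effect, simple linear regression. The sketch below uses NumPy with synthetic data, purely for illustration; none of the numbers or names come from my actual work.

```python
# Least-squares line fit == simple linear regression (synthetic data, illustration only)
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.5 * x + 1.0 + rng.normal(scale=0.5, size=x.size)  # noisy 'observations'

# Fit y ~ slope * x + intercept by minimizing the sum of squared residuals
slope, intercept = np.polyfit(x, y, deg=1)
print(f"slope ~ {slope:.2f}, intercept ~ {intercept:.2f}")
```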

True confession: I’ve started more than one course in Machine Learning in a blunt-instrument attempt to address this known gap in my knowledge of relevant methods. Started is, unfortunately, the operative word, as (thus far) no attempt I’ve made has been followed through – even when there are options for community, accountability, etc. to better ensure success. (Though ‘life got in the way’ of me participating fully in the fast.ai study group facilitated by the wonderful team that delivers the This Week in Machine Learning & AI Podcast, such approaches to learning Machine Learning are appealing in principle – even if my own engagement was grossly inconsistent.)

On Depth

What then about depth? Taking the self-serving but increasingly concrete example of my own Portfolio, it’s clear that (at times) I’ve demonstrated depth. Driven by an interesting problem aimed at improving tsunami alerting by processing data extracted from Twitter, for example, the deepening progression with co-author Jim Freemantle has been as follows:

  1. Attempt to apply existing knowledge-representation framework to the problem by extending it (the framework) to include graph analytics
  2. Introduce tweet classification via Machine Learning
  3. Address the absence of semantics in the classification-based approach through the introduction of Natural Language Processing (NLP) in general, and embedded word vectors in particular
  4. Next steps …

(Again, please refer to my Portfolio for content relating to this use case.) Going deeper, in this case, is not a demonstration of a linear progression; rather, it is a sequence of outcomes realized through experimentation, collaboration, consultation, etc. For example, the seed to introduce Machine Learning into this tsunami-alerting initiative was planted on the basis of informal discussions at an oil and gas conference … and later, the introduction of embedded word vectors was similarly the outcome of informal discussions at a GPU technology conference.
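To make step 2 of the progression above a little more tangible, here is a minimal, hypothetical sketch of tweet classification. It assumes scikit-learn is available, and the tweets, labels and model choices are invented for illustration only – this is not the actual pipeline from the tsunami-alerting project.

```python
# Minimal, illustrative sketch of tweet classification (step 2 above).
# Assumes scikit-learn is installed; the data and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled tweets: 1 = potentially tsunami-related, 0 = not
tweets = [
    "huge waves hitting the harbour after the earthquake",
    "strong shaking felt downtown, water receding from the beach",
    "great sushi at the new place by the harbour",
    "traffic is terrible on the highway this morning",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a simple linear classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

# Score an unseen tweet; in practice a much larger labelled set is required
print(model.predict_proba(["sea level dropping fast near the pier"])[0][1])
```

Note that a bag-of-words pipeline of this sort carries no real semantics, which is precisely the limitation that step 3 – NLP in general, and embedded word vectors in particular – is meant to address.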

Whereas these latter examples are intended primarily to demonstrate the School of Depth, it is clear that the two schools of thought aren’t mutually exclusive. For example, in delving into a problem of interest, Jim and I may have deepened our mastery of specific skills within NLP; however, we have also broadened our knowledge within this important subdomain of Machine Learning.

One last thought here on depth. At the outset, neither Jim nor I had any innate desire to explore NLP. Rather, the problem, and more importantly the demands of the problem, caused us to ‘gravitate’ towards NLP. In other words, we are wedded more to making scientific progress (on tsunami alerting) than to a specific method for Machine Learning (e.g., NLP).

Next Steps

Net-net then, it appears that what motivates us is what dominates in practice – in spite, perhaps, of our best intentions. In my own case, my existential crisis derives from being driven by problems into depth, while at the same time seeking to demonstrate a broader portfolio of expertise with Machine Learning. To be more specific, there’s a part of me that wants to apply LSTMs (for example) to the tsunami-alerting use case, whereas another part knows I must broaden (at least a little!) my portfolio when it comes to methods applicable to Machine Learning.

Finally then, how do I plan to address this crisis? For me, it’ll likely manifest itself as a two-pronged approach:

  1. Enrol in and follow through on a course (at least!) that exposes me to one or more methods of Machine Learning that complement my existing exposure to classification and NLP.
  2. Identify a problem, or problems of interest, that allow me to deepen my mastery of one or more of these ‘newly introduced’ methods of Machine Learning.

In a perfect situation, perhaps we’d emphasize both breadth and depth. However, when you’re attempting to introduce, pivot or re-position yourself, a trade-off between breadth and depth appears to be inevitable. An introspective reflection, based upon the substance of a self-curated portfolio, appears to be an effective and efficient means of road-mapping how gaps can be identified and ultimately addressed.

Postscript

In many settings/environments, Machine Learning, and Data Science in general, are team sports. Clearly then, a viable way to address the challenges and opportunities presented by depth versus breadth is to hire accordingly – i.e., hire for both depth and breadth in your organization.

Prob & Stats Gaps: Sprinting for Closure

Prob & Stats Gap

When it comes to the mathematical underpinnings for Deep Learning, I’m extremely passionate. In fact, my perspective can be summarized succinctly:

Deep Learning – Deep Math = Deep Gap.

In reflecting upon my own mathematical credentials for Deep Learning, when it came to probability and statistics, I previously stated:

Through a number of courses in Time Series Analysis (TSA), my background affords me an appreciation for prob & stats. In other words, I have enough context to appreciate this need, and through use of quality, targeted resources (e.g., Goodfellow et al.’s textbook), I can close out the gaps sufficiently – in my case, for example, Bayes’ Rule and information theory.
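For concreteness, here is Bayes’ Rule in its standard form – a textbook result rather than anything specific to my own work – with A and B standing in for arbitrary events:

```latex
% Bayes' Rule: the posterior P(A|B) in terms of likelihood, prior and evidence
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)},
\qquad
P(B) = \sum_i P(B \mid A_i)\, P(A_i) \quad \text{(law of total probability)}
```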

Teaching to Learn

Although I can certainly leverage quality, targeted resources, I wanted to share here a complementary approach. One reason for doing this is that resources such as Goodfellow et al.’s textbook may not be readily accessible to everyone – in other words, some homework is required before some of us are ready to crack open this excellent resource, and make sense of the prob & stats summary provided there.

So, in the spirit of progressing towards being able to leverage appropriate references such as Goodfellow et al.’s textbook, please allow me to share here a much-more pragmatic suggestion:

Tutor a few high school students in prob & stats to learn prob & stats.

Just in case the basic premise of this suggestion isn’t evident, it is this: by committing to teaching prob & stats, you must be able to understand prob & stats. And as an added bonus, this commitment to tutoring each of a few students (say) once a week establishes and reinforces a habit – a habit that is quite likely, in this case, to ensure you stick with your objective to broaden and deepen your knowledge/skills when it comes to probability and statistics.

As a further bonus, this is a service for which you could charge a fee – anywhere from the full rate for tutoring high-school math down to gratis, depending upon the value you’ll be able to offer your students … a rate you could, of course, adjust over time as your expertise with prob & stats develops.

Agile Sprints

Over recent years, I’ve found it particularly useful to frame initiatives such as this one in the form of Agile Sprints – an approach I’ve adopted and adapted from the pioneering efforts of J D Meier. To try this for yourself, I suggest the following two-step procedure:

  1. Review JD’s blog post on sprints – there’s also an earlier post of his that is both useful and relevant.
  2. Apply the annotated template I’ve prepared here to a sprint of your choosing. Because the sample template I’ve shared is specific to the prob & stats example I’ve been focused on in this post, I’ve also included a blank version of the sprint template here.

4DXit

Before you go, there’s one final point I’d like to draw your attention to – and that’s lead and lag measures. Whereas lag measures focus on your (wildly) important goal (WIG), lead measures emphasize those behaviors that’ll get you there. To draw from the example I shared for addressing a math gap in prob & stats, the lag measure is:

MUST have enhanced my knowledge/skills in the area of prob & stats such that I am better prepared to review Deep Learning staples such as Goodfellow et al.’s textbook

In contrast, examples of lead measures are each of the following:

SHOULD have sought tutoring positions with local and/or online services

COULD have acquired the textbook relevant for high-school level prob & stats

With appropriately crafted lead measures then, the likelihood that your WIG will be achieved is significantly enhanced. Kudos to Cal Newport for emphasizing the importance of acting on lead measures in his book Deep Work. For all four disciplines of execution, you can have a closer look at Newport’s book, or go to the 4DX source itself – the book, or simply Google “the 4 disciplines of execution”.

Of course, the approach described here can be applied to much more than a gap in your knowledge/skills of prob & stats. And as I continue the process of self-curating my Data Science Portfolio, I expect to unearth additional challenges and opportunities – challenges and opportunities that can be well approached through 4DX’d Agile Sprints.

Recent Workshop: Nurturing Quantitative Skills for the Physical Sciences through use of Scientific Models

A few weeks back, I delivered a workshop at a conference focused on online learning. Unfortunately, abstracts were not made available via the event’s web site. In lieu of directing you elsewhere then, below is the abstract I submitted:

Nurturing Quantitative Skills for the Physical Sciences through use of Scientific Models

L. I. Lumb
Division of Natural Science, Faculty of Science, York University
Toronto, Ontario, Canada

With numerous scientists asserting that we have entered into The Anthropocene, a ‘brand new’ Geologic Epoch that underscores human impact on planet Earth, there has arguably never been a more relevant time for literacy in the physical sciences. Complicating this, however, is the implied need for quantitative skills demanded of those who seek to have more than a superficial degree of literacy in matters relating to climate or global change. Grounded by direct, personal experience in teaching science to non-scientists at the undergraduate university level, and independently validated by academic research into Science Technology Engineering Math (STEM) related programs and subjects, mastery of even the most-basic quantitative skills presents a well-established challenge in engaging learners at levels beyond the quantitatively superficial – a challenge that appears to be increasingly the case with the arriving cohort of undergraduates each Fall. In an effort to systematically develop and encourage proficiency in quantitative skills in data-rich courses in the physical sciences, a number of scientific models have been introduced by the author. Ranging from embarrassingly simple physical models using rice to investigate relative humidity in Earth’s atmosphere, to software-based models that employ spreadsheets to elucidate aspects of climate and global change, the use of scientific models presents intriguing challenges and opportunities for both instructors and students; needless to state, these challenges and opportunities can be significantly exacerbated in courses that are delivered online to numbers in excess of 100 students. After an introduction of scientific models as a pedagogical vehicle for nurturing quantitative skills, emphasis shifts to the sharing of real-world experiences with this approach in relatively large, online courses in physical sciences taught at the undergraduate level to non-majors (and therefore non-scientists). In ultimately working towards the primary example of a relatively simple, yet scientifically appropriate spreadsheet model for the Paris Climate Agreement, participants’ involvement will be scaffolded through use of other examples of models that have also been used in practice. Participants will also be encouraged to engage in a dialogue that compares and contrasts these models with more traditional approaches (e.g., formal essays). Finally, armed with some context for models as a pedagogical vehicle for quantitatively enhancing student engagement, participants will be guided through exercises that will allow them to develop their own models for their own teaching and learning requirements – whether their interests fall within or beyond scientifically oriented disciplines.

As you can see, I have a vested interest in nurturing quantitative skills, and models are one of the vehicles I make use of. If you share similar interests or, better yet, if you have ideas as to what’s worked for you, please feel free to comment.

Guest Post: Four Tips for Taking Great Cloud Photos

Kevin Li took NATS 1780 two years ago. In addition to maintaining an interest in weather and climate, Kevin remains an accomplished and enthusiastic photographer. I asked Kevin if he might have a few cloud-photo tips to share with the students currently taking NATS 1780 at Toronto’s York University. Here’s his response:

Four Tips for Taking Great Cloud Photos:

  • It starts with the composition of the photo (what you include in your photo – mostly clouds with some landscape, or just clouds and the sky?). Good composition will show us location, approximate time of day, and weather conditions (which could explain why the clouds are shaped the way they are)
  • Head out in the early morning around sunrise and around sunset. This will add some warm colours to your photos, especially around sunset. You will notice that the clouds are more visible and distinct at those times of day than at mid-day
  • Focusing of the camera will be crucial and will depend on your camera. The focus should be placed on the cloud you want to photograph. This allows the camera to adjust the lighting to avoid overexposure and/or underexposure
  • Lastly, if you are using a smartphone, your phone might have a feature that will boost the colour saturation levels. This feature will make some of your photos pop! For those with DSLRs and point-and-shoot cameras, this can be done in post-production or maybe in-camera, depending on the camera you have.

It’s not about the camera, but the person who is behind the camera! 🙂

Note for DSLR users only: A circular polarizer will help on those bright sunny days. If you don’t have one, use a high shutter speed or decrease the aperture size to f/8 or smaller.

Many thanks to Kevin for sharing this excellent advice!

If you have additional tips to share, please feel free to add a comment. If you have a question, I’m sure I can persuade Kevin to answer it.

Current Events in the Classroom: Experiments on Mars-Like Clouds Stimulate the Learning Process

Everyone has an appreciation for humidity and clouds … However, when you seek to understand humidity and clouds from the scientific perspective, ‘things get technical’ in a hurry! As someone who attempts to share science with non-scientists, it’s wonderful to be able to work current events into the (physical/virtual) classroom. Some recent experimental results, aimed at simulating Martian-style clouds, allow for a highly topical teachable moment.

For the details, please see below my recent post (via Moodle) to my Weather and Climate class at Toronto’s York University:

[Image: screenshot of the Moodle post to my Weather and Climate class]

Now, if only I could have such a cloud chamber in the (virtual) classroom …

Pencasting During Lectures in Large Venues

In a recent post on pencasting as a way of teaching/learning weather and climate, I stated:

Monday (October 1, 2012), I intend to use a pencast during my lecture – to introduce aspects of the stability of Earth’s atmosphere. I’ll try to share here how it went. For this intended use of the pencast, I will use a landscape mode for presentation – as I expect that’ll work well in the large lecture hall I teach in. I am, however, a little concerned that the lines I’ll be drawing will be a little too thin/faint for the students at the back of the lecture theatre to see …

I followed through as advertised (above) earlier today.

[Image: the 500-seat Price Family Cinema at York University]

My preliminary findings are as follows:

  • The visual aspects of the pencast are quite acceptable – This is true even in large lecture halls such as the 500-seat Price Family Cinema at York University (pictured above) in Toronto, Canada where I am currently teaching. I used landscape mode for today’s pencast, and zoomed it in a little. A slightly thicker pen option would be wonderful for such situations … as would different pen colours (the default is green).
  • The audio quality of the pencasts is very good to excellent – Although my Livescribe pen came with a headset/microphone, I don’t use it. I simply use the built-in microphone on the pen, and speak normally when I am developing pencasts. Of course, the audio capabilities of the lecture hall I teach in are most excellent for playback!
  • One-to-many live streaming of pencasts works well – I streamed live directly from myLivescribe today. I believe the application infrastructure is based largely on Adobe Flash and various Web services delivered by Web Objects. Regardless of the technical underpinnings, live streaming worked well. Of course, I could’ve developed a completely self-contained PDF file, downloaded it, and run the pencast locally using Adobe Reader.
  • Personal pencasting works well – I noticed that a number of students were streaming the pencast live for themselves during the lecture. In so doing, they could control interaction with the pencast.

Anecdotally, a few students mentioned that they appreciated the pencast during the break period – my class meets once per week for a three-hour session.

Although I’ve yet to hear this feedback directly from the students, I believe I need to:

  • Decrease the duration of pencasts – Today’s runs about 10 minutes
  • Employ a less-is-more approach/strategy – My pencasts are fairly involved when done …
  • Experiment with the right balance of speaking to penning (is that even a word!?) – Probably a less-is-more approach/strategy would work well here for both the penned and spoken word …

Finally, today’s pencast on the basics of atmospheric stability:

  • Previous approach – Project an illustration taken directly from the course’s text. This is a professionally produced, visually appealing, detailed, end-result, static diagram that I embedded in my presentation software (I use Google Docs for a number of reasons.) Using a laser pointer, my pedagogy called for a systematic deconstruction of this diagram – hoping that the students would be engaged enough to actually follow me. Of course, in the captured versions of my lectures, the students don’t actually see where I’m directing the laser pointer. The students have access to the course text and my lecture slides. I have no idea if/how they attempt to ingest and learn from this approach.
  • Pencasting – As discussed elsewhere, the starting point is a blank slate. Using the pencasting technology, I sketch my own rendition of the illustration from the text. As I build up the details, I explain the concept of stability analyses. Because the sketch appears as I speak, the students have the potential to follow me quite closely – and if they miss anything, they can review the pencast after class at their own pace. The end result of a pencast is a sketch that doesn’t hold a candle to the professionally produced illustration provided in the text and my lecture notes. However, to evaluate the pencast as merely a final product, I believe, misses the point completely. Why? I believe the pencast is a far superior way to teach and to learn in situations such as this one. Why? I believe the pencast allows the teacher to focus on communication – communication that the learner can also choose to be highly receptive to, and engaged by.

I still regard myself as very much a neophyte in this arena. However, as the above final paragraphs indicate, pencasting is a disruptive innovation whose value in teaching/learning merits further investigation.

Teaching/Learning Weather and Climate via Pencasting

I first heard about it a few years ago, and thought it sounded interesting … and then, this past Summer, I did a little more research and decided to purchase a Livescribe 8 GB Echo(TM) Pro Pack. Over the Summer, I took notes with the pen from time-to-time and found it to be somewhat useful/interesting.

Just this week, however, I decided it was time to use the pen for the originally intended purpose: Making pencasts for the course I’m currently teaching in weather and climate at Toronto’s York University. Before I share some sample pencasts, please allow me to share my findings based on less than a week’s worth of ‘experience’:

  • Decent-quality pencasts can be produced with minimal effort – I figured out the basics (e.g., how to record my voice) in a few minutes, and started on my first pencast. Transferring the pencast from the pen to the desktop software to the Web (where it can be shared with my students) also requires minimal effort. “Decent quality” here refers to both the visual and audio elements. The fact that this is both a very natural (writing with a pen while speaking!) and speedy (efficient/effective) undertaking means that I am predisposed towards actually using the technology whenever it makes sense – more on that below. Net-net: This solution is teacher-friendly.
  • Pencasts complement other instructional media – This is my current perspective … Pencasts complement the textbook readings I assign, the lecture slides plus video/audio captures I provide, the Web sites we all share, the Moodle discussion forums we engage in, the Tweets I issue, etc. In the spirit of blended learning, it is my hope that pencasts, in concert with these other instructional media, will allow my TAs and me to ‘reach’ most of the students in the course.
  • Pencasts allow the teacher to address both content and skills-oriented objectives – Up to this point, my pencasts have started from a blank page. This forces me to be focused, and to systematically develop towards some desired content (e.g., conceptually introducing the phase diagram for H2O) and/or skills (e.g., how to calculate the slope of a line on a graph) oriented outcome. Because students can follow along, they have the opportunity to be fully engaged as the pencast progresses. Of course, what this also means is that this technology can be effective not only in the first-year university course I’m currently teaching, but also at the academic levels that precede (e.g., grade school, high school, etc.) and follow (senior undergraduate and graduate) this one.
  • Pencasts are learner-centric – In addition to being teacher-friendly, pencasts are learner-centric. Although a student could passively watch and listen to a pencast as it plays out in a linear, sequential fashion, the technology almost begs you to interact with it. As noted previously, this means a student can easily replay some aspect of the pencast that they missed. Even more interestingly, however, students can interact with pencasts in a random-access mode – a mode that would almost certainly be useful when they are attempting to apply the content/skills conveyed through the pencast to a tutorial or assignment they are working on, or a quiz or exam they are actively studying for. It is important to note that both the visual and audio elements of the pencast can be manipulated with impressive responsiveness to random-access input from the student.
  • I’m striving for authentic, not perfect pencasts – With a little more practice and some planning/scripting, I’d be willing to bet that I could produce an extremely polished pencast. Based on past experience teaching today’s first-year university students, I’m fairly convinced that this is something they couldn’t care less about. Let’s face it, my in-person lectures aren’t perfectly polished, and neither are my pencasts. Because I can easily go back to existing pencasts and add to them, I don’t need to fret too much about being perfect the first time. Too much time spent fussing here would diminish the natural and speedy aspects of the technology.

Findings aside, on to samples:

  • Calculating the lapse rate for Earth’s troposphere – This is largely a skills-oriented example (a worked version of the underlying calculation follows this list). It was my first pencast. I returned twice to the original pencast to make changes – once to correct a spelling mistake, and the second time to add in a bracket (“Run”) that I forgot. I communicated these changes to the students in the course via an updated link shared through a Moodle forum dedicated to pencasts. If you were to experience the updates, you’d almost be unaware of the lapse of time between the original pencast and the updates, as all of this is presented seamlessly as a single pencast to the students.
  • Introducing the pressure-temperature phase diagram for H2O – This is largely a content-oriented example. I got a little carried away in this one, and ended up packing in a little too much – the pencast is fairly long, and by the time I’m finished, the visual element is … a tad on the busy side. Experience gained.
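For anyone unable to play the pencast itself, the calculation behind the lapse-rate sample is just the slope (‘rise over run’) of temperature against altitude. Here is a worked example with made-up but physically reasonable numbers (15 °C at the surface, -17.5 °C at 5 km):

```latex
% Lapse rate: the rate at which temperature decreases with altitude
\Gamma = -\frac{\Delta T}{\Delta z}
       = -\frac{(-17.5\,^{\circ}\mathrm{C}) - (15\,^{\circ}\mathrm{C})}{5\,\mathrm{km} - 0\,\mathrm{km}}
       = 6.5\,^{\circ}\mathrm{C}\,\mathrm{km}^{-1}
```

The result happens to match the average environmental lapse rate of Earth’s troposphere.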

Anecdotally, initial reaction from the students has been positive. Time will tell.

Next steps:

  • Monday (October 1, 2012), I intend to use a pencast during my lecture – to introduce aspects of the stability of Earth’s atmosphere. I’ll try to share here how it went. For this intended use of the pencast, I will use a landscape mode for presentation – as I expect that’ll work well in the large lecture hall I teach in. I am, however, a little concerned that the lines I’ll be drawing will be a little too thin/faint for the students at the back of the lecture theatre to see …
  • I have two sections of the NATS 1780 Weather and Climate course to teach this year. One section is taught the traditional way – almost 350 students in a large lecture theatre, 25-student tutorial groups, supported by Moodle, etc. In striking contrast to the approach taken in the meatspace section is the second section, where almost everything takes place online via Moodle. Although I have yet to support this hypothesis with any data, it is my belief that these pencasts are an excellent way to reach out to the students in the Internet-only section of the course. More on this over the fullness of time (i.e., the current academic session).

Feel free to comment on this post or share your own experiences with pencasts.

Triple and Quadruple Rainbows: Theory Meets Practice

Last Fall 2010/Winter 2011, I taught the science of weather and climate to non-scientists at Toronto’s York University.

During the Fall semester, a unit of NATS 1780 focused on atmospheric optics. Not surprisingly, rainbows were one of the topics that received attention.

By the end of this unit, students understood that rainbows are the consequence of a twofold optical manipulation of sunlight:

  • Raindrops bend sunlight.  Not only do raindrops bend (refract) sunlight, they do so with extreme prejudice. Blue light gets bent the most, red the least. In other words, this is a wavelength-based prejudice: The shorter the wavelength, the more the light is bent. This highly selective refraction is known as dispersion. Like a prism then, raindrops allow for the individual colours that comprise visible light to be made evident.
  • Raindrops reflect sunlight.  Inside the raindrop, reflection occurs. In fact, multiple reflections can occur. And if all of the angles are just right, these reflections can remain contained within the raindrop. This is known as the phenomenon of Total Internal Reflection (TIR).

The combined effect of bending and internally reflecting is best understood with a diagram. Note in this Wikipedia diagram that sunlight interacts with the air/raindrop boundary upon entry, gets reflected internally once, and then again interacts with the raindrop/air boundary upon exit from the raindrop. Taken together, the result is a single rainbow; the underlying relation is summarized just below.
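The wavelength-selective bending described above is Snell’s law applied at the air/raindrop boundary, with a refractive index for water that varies slightly with wavelength; the numerical values below are approximate and included only for illustration.

```latex
% Snell's law at the air/raindrop boundary
% (theta_i = angle of incidence, theta_r = angle of refraction)
n_{\mathrm{air}} \sin\theta_i = n_{\mathrm{water}}(\lambda)\, \sin\theta_r

% Water refracts shorter (blue) wavelengths slightly more than longer (red) ones:
n_{\mathrm{water}}(\text{blue}) \approx 1.34 \;>\; n_{\mathrm{water}}(\text{red}) \approx 1.33
```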

How are double rainbows produced? By increasing the number of internal reflections to two.

Single and double rainbows are relatively easily observed.

On the Fall 2010 Exam in NATS 1780, I included the question:

If it were possible, how would a tertiary (i.e., third) rainbow be produced?

A number of students correctly answered that three reflections internal to the raindrop would be required to produce such a phenomenon.

Although I had intended this to be a question of theoretical merit only, I was delighted to learn that both triple and quadruple rainbows have been observed – in other words, they are no longer just a theoretical possibility. (Quadruple rainbows would require four internal reflections.)

Alas, I’ve only ever been able to capture single and double rainbows … My personal quest for the more elusive triple and quadruple rainbows continues …