Pencasting with a Wacom tablet: Time to revisit this option

Around the start of the Fall term in September 2014, I found myself in a bit of a bind: My frustration with Livescribe pencasting had peaked, and I was desperately seeking alternatives. To be clear, it was changes to the Livescribe platform that were the source of this frustration, not pencasting as a means for visual communication. If anything, the Livescribe experience had SOLD me on pencasting as an extremely effective means of communicating visually – an approach that delivered significant value in instructional settings such as the large classes I was teaching at the university level.

In an attempt to find an alternative to the Livescribe platform, I discovered and acquired a small Wacom tablet. Whereas I rapidly became proficient with the Livescribe Echo smartpen, because it was truly like using a regular pen, my learning curve with the Wacom solution was considerably steeper.

To be concrete, you can view a relatively early attempt on YouTube. As one viewer commented:

Probably should practice the lecture. Too many pauses um er ah.

Honestly, that was more a reflection of my grasp of the Wacom platform than my expertise with the content I was attempting to convey through this real-time screen capture. In other words, my comfort level with this technology was so low that I was distracted by it. Given that many, many thousands of visual (art) professionals make use of this or similar solutions from Wacom, I’m more than willing to admit that this one was ‘on me’ – I wasn’t ‘a natural’.

With the Wacom solution, you need to train your eyes to stay fixed on your screen while your hand writes and draws on the tablet. Not exactly known for my hand-eye coordination in general, I evidently struggled with this technology. As I look at the results some four years later, I’m not quite as dismayed as I expected to be. My penmanship isn’t all that bad – even though I still find writing and drawing with this tablet to be a taxing exercise in humility. In hindsight, I’m also fairly pleased with the Wacom tablet’s ability to permit use of colour, as well as lines of different thicknesses. This flexibility, completely out of scope in the solution from Livescribe, introduces a whole new level of prospects for visual communication.

Knowing that others have mastered the Wacom platform, and having some personal indication of its potential to produce useful results, I’m left with the idea of giving this approach another try – soon. I’ll let you know how it goes.

Revisiting the Estimation of Fractal Dimension for Image Classification

Classification is a well-established use case for Machine Learning. Textbook examples abound: the classification of email into ham versus spam, or of images into cats versus dogs.

Circa 1994, I was unaware of Machine Learning, but I did have a use case for quantitative image classification. I expect you’re familiar with those brave souls known as The Hurricane Hunters – brave because they explicitly seek to locate the eyes of hurricanes using an appropriately tricked out, military-grade aircraft. Well, these hunters aren’t the only brave souls when it comes to chasing down storms in the pursuit of atmospheric science. In an effort to better understand Atlantic storms (i.e., East Coast, North America), a few observational campaigns featured aircraft flying through blizzards at various times during Canadian winters.

In addition to standard instrumentation for atmospheric and navigational observables, these planes were tricked out in an exceptional way:

For about two-and-a-half decades, Knollenberg-type [ref 4] optical array probes have been used to render in-situ digital images of hydrometeors. Such hydrometeors are represented as a two-dimensional matrix, whose individual elements depend on the intensity of transmitted light, as these hydrometeors pass across a linear optical array of photodiodes. [ref 5]

In other words, the planes were equipped with underwing optical sensors that had the capacity to obtain in-flight images of

hydrometeor type, e.g. plates, stellar crystals, columns, spatial dendrites, capped columns, graupel, and raindrops. [refs 1,7]

(Please see the original paper for the references alluded to here.)

Even though this was hardly a problem in Big Data, a single flight might produce tens to thousands of hydrometeor images that needed to be classified manually by atmospheric scientists. Because I was working for a boutique consultancy focused on atmospheric science, and because we had excellent relationships with Environment Canada scientists who made Cloud Physics their express passion, an opportunity to automate the classification of hydrometeors presented itself.

Around this same time, I became aware of fractal geometry – a visually arresting and quantitative description of nature popularized by proponents such as Benoit Mandelbrot. Whereas simple objects (e.g., lines, planes, cubes) can be associated with an integer dimension (e.g., 1, 2 and 3, respectively), objects in nature (e.g., a coastline, a cloud outline) can be better characterized by a fractional dimension – a real-valued fractal dimension that lies between the integer value for a line (i.e., 1) and that for a plane (i.e., 2).
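
For readers curious about how a fractal dimension is actually estimated, a minimal sketch of the standard box-counting estimator follows. This is illustrative Python, not the specific algorithm my colleagues and I used circa 1994: it counts the boxes a point set occupies at several scales, and takes the slope of log(count) versus log(1/size) as the dimension estimate.

```python
import numpy as np

def box_counting_dimension(points, box_sizes):
    """Estimate fractal dimension by counting occupied boxes at several scales."""
    counts = []
    for s in box_sizes:
        # Bin each point into a box of side s; count the distinct occupied boxes
        boxes = {tuple(np.floor(p / s).astype(int)) for p in points}
        counts.append(len(boxes))
    # Slope of log(count) vs log(1/size) approximates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Sanity check: a straight line segment should come out close to dimension 1
t = np.linspace(0.0, 1.0, 5000)
line = np.column_stack([t, 0.5 * t])
dim = box_counting_dimension(line, box_sizes=[0.1, 0.05, 0.025, 0.0125])
```

A genuinely fractal outline (a coastline, a dendritic snow crystal) would yield a value strictly between 1 and 2 under the same procedure.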

Armed with an approach for estimating fractal dimension then, my colleagues and I sought to classify hydrometeors based on their subtle to significant geometrical expressions. Although the idea was appealing in principle, the outcome on a per-hydrometeor basis was a single, scalar result that attempted to capture geometrical uniqueness. In isolation, this approach was simply not enough to deliver an automated scheme for quantitatively classifying hydrometeors.

I well recall some of the friendly conversations I had with scientific and engineering peers at the conference at Montreal’s École Polytechnique. Essentially, the advice I was given was to regard the work I’d done as a single dimension of the hydrometeor classification problem. What I really needed to do was develop additional dimensions for classifying hydrometeors. With enough dimensions, the resulting multidimensional classification scheme would have a much better chance of delivering the automated solution sought by the atmospheric scientists.

In my research, fractal dimensions were estimated using various algorithms; they were not learned. However, they could be – as is clear from the efforts of others (e.g., the prediction of fractal dimension via Machine Learning). And though my pursuit of such a suggestion will have to wait for a subsequent research effort, a learned approach might allow for a much more multidimensional scheme for the quantitative classification of hydrometeors. Of course, from the hindsight of 2018, there are a number of possibilities for quantitative classification via Machine Learning – possibilities that I fully expect would result in more useful outcomes.
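
To make the “additional dimensions” advice concrete, here is a hedged sketch: the feature names and numbers below are entirely hypothetical (not data from our campaigns), but they illustrate how a fractal dimension, combined with other geometric measures, could feed even the simplest multidimensional classifier – a nearest-centroid rule.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical feature vectors: [fractal dimension, aspect ratio, area fraction].
# Two synthetic "hydrometeor" classes whose individual features might overlap,
# but whose combined feature vectors separate cleanly -- the point my peers
# at the conference were making.
dendrites = rng.normal(loc=[1.7, 1.0, 0.35], scale=0.05, size=(50, 3))
columns   = rng.normal(loc=[1.2, 3.0, 0.70], scale=0.05, size=(50, 3))

def nearest_centroid(sample, centroids):
    """Classify a feature vector by its closest class centroid."""
    distances = np.linalg.norm(centroids - sample, axis=1)
    return int(np.argmin(distances))

centroids = np.vstack([dendrites.mean(axis=0), columns.mean(axis=0)])
# A dendrite-like sample should land in class 0
pred = nearest_centroid(np.array([1.65, 1.1, 0.4]), centroids)
```

A scalar fractal dimension alone could not separate classes whose outlines happen to have similar roughness; adding dimensions is what makes the scheme workable.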

Whereas fractals don’t receive as much attention these days as they once did – certainly nothing close to the hype that pervades most discussions of Machine Learning – there may still be some value in incorporating their ability to quantify geometry into algorithms for Machine Learning. From a very different perspective, it might be interesting to see whether the architecture of deep neural networks can be characterized through an estimation of their fractal dimension – if only to tease out geometrical similarities that might otherwise be completely obscured.

While I, or (hopefully) others, ponder such thoughts, there is no denying the stunning way fractals have rendered the geometry of nature visual.

Recent Workshop: Nurturing Quantitative Skills for the Physical Sciences through use of Scientific Models

A few weeks back, I delivered a workshop at a conference focused on online learning. Unfortunately, abstracts were not made available via the event’s web site. In lieu of directing you elsewhere then, below is the abstract I submitted:

Nurturing Quantitative Skills for the Physical Sciences through use of Scientific Models

L. I. Lumb
Division of Natural Science, Faculty of Science, York University
Toronto, Ontario, Canada

With numerous scientists asserting that we have entered The Anthropocene, a ‘brand new’ Geologic Epoch that underscores human impact on planet Earth, there has arguably never been a more relevant time for literacy in the physical sciences. Complicating this, however, is the implied need for quantitative skills demanded of those who seek more than a superficial degree of literacy in matters relating to climate or global change. As direct, personal experience in teaching science to non-scientists at the undergraduate university level makes plain, and as academic research into Science, Technology, Engineering, and Math (STEM) related programs and subjects independently validates, mastery of even the most basic quantitative skills presents a well-established challenge in engaging learners at levels beyond the quantitatively superficial – a challenge that appears increasingly pronounced with each Fall’s arriving cohort of undergraduates. In an effort to systematically develop and encourage proficiency in quantitative skills in data-rich courses in the physical sciences, the author has introduced a number of scientific models. Ranging from embarrassingly simple physical models that use rice to investigate relative humidity in Earth’s atmosphere, to software-based models that employ spreadsheets to elucidate aspects of climate and global change, the use of scientific models presents intriguing challenges and opportunities for both instructors and students; needless to state, these challenges and opportunities can be significantly exacerbated in courses delivered online to more than 100 students. After an introduction of scientific models as a pedagogical vehicle for nurturing quantitative skills, emphasis shifts to the sharing of real-world experiences with this approach in relatively large, online courses in the physical sciences taught at the undergraduate level to non-majors (and therefore non-scientists).
In ultimately working towards the primary example of a relatively simple, yet scientifically appropriate spreadsheet model for the Paris Climate Agreement, participants’ involvement will be scaffolded through use of other examples of models that have also been used in practice. Participants will also be encouraged to engage in a dialogue that compares and contrasts these models with more traditional approaches (e.g., formal essays). Finally, armed with some context for models as a pedagogical vehicle for quantitatively enhancing student engagement, participants will be guided through exercises that will allow them to develop their own models for their own teaching and learning requirements – whether their interests fall within or beyond scientifically oriented disciplines.

As you can see, I have a vested interest in nurturing quantitative skills, and models are one of the vehicles I make use of. If you share similar interests – or, better yet, if you have ideas as to what’s worked for you – please feel free to comment.

Guest Post: Four Tips for Taking Great Cloud Photos

Kevin Li took NATS 1780 two years ago. In addition to maintaining an interest in weather and climate, Kevin remains an accomplished and enthusiastic photographer. I asked Kevin if he might have a few cloud-photo tips to share with the students currently taking NATS 1780 at Toronto’s York University. Here’s his response:

Four Tips for Taking Great Cloud Photos:

  • It starts with the composition of the photo: what do you include – mostly clouds with some landscape, or just clouds and the sky? Good composition will show us location, approximate time of day, and weather conditions (which could explain why the clouds are shaped the way they are)
  • Head out in the early morning around sunrise, or around sunset. This will add some warm colours to your photos, especially around sunset. You will notice that the clouds are more visible and distinct at those times of day than at mid-day
  • Focusing of the camera will be crucial, and will depend on your camera. The focus should be placed on the cloud you want to photograph. This allows the camera to adjust the exposure for that cloud, helping to avoid overexposure and/or underexposure
  • Lastly, if you are using a smartphone, your phone might have a feature that will boost the colour saturation levels. This feature will make some of your photos pop! For those with DSLRs and point-and-shoot cameras, this can be done in post-production or perhaps in-camera, depending on the camera you have

It’s not about the camera, but the person who is behind the camera! 🙂

Note for DSLR users only: A circular polarizer will help on those bright sunny days. If you don’t have one, use a high shutter speed or stop the aperture down to f/8 or smaller.

Many thanks to Kevin for sharing this excellent advice!

If you have additional tips to share, please feel free to add a comment. If you have a question, I’m sure I can persuade Kevin to answer it.

Current Events in the Classroom: Experiments on Mars-Like Clouds Stimulate the Learning Process

Everyone has an appreciation for humidity and clouds … However, when you seek to understand humidity and clouds from the scientific perspective, ‘things get technical’ in a hurry! As someone who attempts to share science with non-scientists, it’s wonderful to be able to work current events into the (physical/virtual) classroom. Some recent experimental results, aimed at simulating Martian-style clouds, allow for a highly topical teachable moment.

For the details, please see below my recent post (via Moodle) to my Weather and Climate class at Toronto’s York University:

[Image: screen capture of the Moodle post to the class]

Now, if only I could have such a cloud chamber in the (virtual) classroom …

Teaching/Learning Weather and Climate via Pencasting

I first heard about it a few years ago, and thought it sounded interesting … and then, this past Summer, I did a little more research and decided to purchase a Livescribe 8 GB Echo™ Pro Pack. Over the Summer, I took notes with the pen from time to time and found it to be somewhat useful/interesting.

Just this week, however, I decided it was time to use the pen for the originally intended purpose: Making pencasts for the course I’m currently teaching in weather and climate at Toronto’s York University. Before I share some sample pencasts, please allow me to share my findings based on less than a week’s worth of ‘experience’:

  • Decent-quality pencasts can be produced with minimal effort – I figured out the basics (e.g., how to record my voice) in a few minutes, and started on my first pencast. Transferring the pencast from the pen to the desktop software to the Web (where it can be shared with my students) also requires minimal effort. “Decent quality” here refers to both the visual and audio elements. The fact that this is both a very natural (writing with a pen while speaking!) and speedy (efficient/effective) undertaking means that I am predisposed towards actually using the technology whenever it makes sense – more on that below. Net-net: This solution is teacher-friendly.
  • Pencasts complement other instructional media – This is my current perspective … Pencasts complement the textbook readings I assign, the lecture slides plus video/audio captures I provide, the Web sites we all share, the Moodle discussion forums we engage in, the Tweets I issue, etc. In the spirit of blended learning, it is my hope that pencasts, in concert with these other instructional media, will allow my TAs and me to ‘reach’ most of the students in the course.
  • Pencasts allow the teacher to address both content- and skills-oriented objectives – Up to this point, my pencasts have started from a blank page. This forces me to be focused, and to develop systematically towards some desired content-oriented (e.g., conceptually introducing the phase diagram for H2O) and/or skills-oriented (e.g., how to calculate the slope of a line on a graph) outcome. Because students can follow along, they have the opportunity to be fully engaged as the pencast progresses. Of course, what this also means is that this technology can be effective not only in the first-year university course I’m currently teaching, but also at the academic levels that precede (e.g., grade school, high school) and follow (senior undergraduate and graduate) it.
  • Pencasts are learner-centric – In addition to being teacher-friendly, pencasts are learner-centric. Although a student could passively watch and listen to a pencast as it plays out in a linear, sequential fashion, the technology almost begs you to interact with it. As noted previously, this means a student can easily replay some aspect of the pencast that they missed. Even more interestingly, however, students can interact with pencasts in a random-access mode – a mode that would almost certainly be useful when they are attempting to apply the content/skills conveyed through the pencast to a tutorial or assignment they are working on, or a quiz or exam they are actively studying for. It is important to note that both the visual and audio elements of the pencast can be manipulated with impressive responsiveness to random-access input from the student.
  • I’m striving for authentic, not perfect pencasts – With a little more practice and some planning/scripting, I’d be willing to bet that I could produce an extremely polished pencast. Based on past experience teaching today’s first-year university students, I’m fairly convinced that this is something they couldn’t care less about. Let’s face it, my in-person lectures aren’t perfectly polished, and neither are my pencasts. Because I can easily go back to existing pencasts and add to them, I don’t need to fret too much about being perfect the first time. Too much time spent fussing here would diminish the natural and speedy aspects of the technology.

Findings aside, on to samples:

  • Calculating the lapse rate for Earth’s troposphere – This is largely a skills-oriented example. It was my first pencast. I returned twice to the original pencast to make changes – once to correct a spelling mistake, and the second time to add in a bracket (“Run”) that I forgot. I communicated these changes to the students in the course via an updated link shared through a Moodle forum dedicated to pencasts. If you were to experience the updates, you’d almost be unaware of the lapse of time between the original pencast and the updates, as all of this is presented seamlessly as a single pencast to the students.
  • Introducing the pressure-temperature phase diagram for H2O – This is largely a content-oriented example. I got a little carried away in this one, and ended up packing in a little too much – the pencast is fairly long, and by the time I’m finished, the visual element is … a tad on the busy side. Experience gained.
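
The lapse-rate pencast boils down to a “rise over run” slope calculation: the temperature drop divided by the altitude gain. A minimal sketch in Python, using illustrative textbook numbers rather than the values from the pencast:

```python
def lapse_rate(z1_m, t1_c, z2_m, t2_c):
    """Lapse rate as "rise over run": temperature change per kilometre of altitude.
    Returned in degrees C per km; a positive value means cooling with height."""
    rise = t1_c - t2_c            # temperature drop (deg C)
    run = (z2_m - z1_m) / 1000.0  # altitude gain (km)
    return rise / run

# Illustrative numbers: 15 C at sea level, -56.5 C at an 11 km tropopause,
# which recovers the standard-atmosphere average of about 6.5 C/km
rate = lapse_rate(0, 15.0, 11000, -56.5)
```

The same slope calculation is exactly what students perform graphically when they read “Rise” and “Run” off the pencast’s temperature-versus-altitude plot.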

Anecdotally, initial reaction from the students has been positive. Time will tell.

Next steps:

  • Monday (October 1, 2012), I intend to use a pencast during my lecture – to introduce aspects of the stability of Earth’s atmosphere. I’ll try to share here how it went. For this intended use of the pencast, I will use a landscape mode for presentation – as I expect that’ll work well in the large lecture hall I teach in. I am, however, a little concerned that the lines I’ll be drawing will be a little too thin/faint for the students at the back of the lecture theatre to see …
  • I have two sections of the NATS 1780 Weather and Climate course to teach this year. One section is taught the traditional way – almost 350 students in a large lecture theatre, 25-student tutorial groups, supported by Moodle, etc. In striking contrast to the approach taken in the meatspace section is the second section, where almost everything takes place online via Moodle. Although I have yet to support this hypothesis with any data, it is my belief that these pencasts are an excellent way to reach the students in the Internet-only section of the course. More on this in the fullness of time (i.e., over the current academic session).

Feel free to comment on this post or share your own experiences with pencasts.

Synthetic Life and Evolution of Earth’s Second Atmosphere

I have the pleasure of teaching the science of weather and climate to non-scientists again this Fall/Winter session at Toronto’s York University. In the Fall 2011 Term, time was spent discussing the origin and evolution of Earth’s atmosphere. What follows is a post I just shared with the class via Moodle (our LMS):
Photosynthesizing anaerobic lifeforms in Earth’s oceans were likely responsible for systematically enriching Earth’s atmosphere with respect to O2. Through chemical reactions in Earth’s atmosphere, O3 and the O3 layer were systematically derived from this same source of O2. The O3 layer’s ability to minimize the impact of harmful UV radiation, in tandem with the ascent of [O2] to current values of about 21% by volume, were and remain crucial to life as we experience it today.

In tracing the evolution of Earth’s second atmosphere from a composition based on volcanic outgassing to its present state, the role of life was absolutely critical.

On my drive home tonight after today’s lecture, I happened upon a broadcast regarding synthetic life on CBC Radio’s Ideas. Based upon annotated excerpts from a Craig Venter lecture, this broadcast is well worth the listen in and of itself. And although I’m no life scientist, I can’t help but predict that Venter’s work will ultimately lead to refinements, if not a complete rewrite, of life’s role in the evolution of Earth’s second atmosphere.
If you have any thoughts on this prediction, please feel free to share them here via a comment.

Triple and Quadruple Rainbows: Theory Meets Practice

During the Fall 2010/Winter 2011 session, I taught the science of weather and climate to non-scientists at Toronto’s York University.

During the Fall semester, a unit of NATS 1780 focused on atmospheric optics. Not surprisingly, rainbows were one of the topics that received attention.

By the end of this unit, students understood that rainbows are the consequence of a twofold optical manipulation of sunlight:

  • Raindrops bend sunlight.  Not only do raindrops bend (refract) sunlight, they do so with extreme prejudice. Blue light gets bent the most, red the least. In other words, this is a wavelength-based prejudice: The shorter the wavelength, the more the light is bent. This highly selective refraction is known as dispersion. Like a prism then, raindrops allow for the individual colours that comprise visible light to be made evident.
  • Raindrops reflect sunlight.  Inside the raindrop, reflection occurs. In fact, multiple reflections can occur. And if all of the angles are just right, these reflections can remain contained within the raindrop. This is known as the phenomenon of Total Internal Reflection (TIR).
The combined effect of bending and internally reflecting is best understood with a diagram. Note in this Wikipedia diagram that sunlight interacts with the air/raindrop boundary upon entry, gets reflected internally once, and then again interacts with the raindrop/air boundary upon exit from the raindrop. Taken together, the result is a single rainbow.

How are double rainbows produced? By increasing the number of internal reflections to two.
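
The wavelength-based “prejudice” of refraction can be made quantitative with Snell’s law, n1·sin(θ1) = n2·sin(θ2). A small Python sketch, using approximate indices of refraction for water (the exact values vary with temperature and wavelength):

```python
import math

def refraction_angle_deg(incidence_deg, n1, n2):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2); returns theta2 in degrees."""
    sin_t2 = (n1 / n2) * math.sin(math.radians(incidence_deg))
    return math.degrees(math.asin(sin_t2))

# Approximate indices of refraction for water at red and blue wavelengths
N_RED, N_BLUE = 1.331, 1.343

# Sunlight entering a raindrop from air (n = 1.0) at 60 degrees incidence
red = refraction_angle_deg(60.0, 1.0, N_RED)
blue = refraction_angle_deg(60.0, 1.0, N_BLUE)
# blue < red: the blue ray is bent further from its original path
```

The half-degree or so of separation between the two refracted rays, compounded on entry and exit, is what fans white sunlight into the rainbow’s spread of colours.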

Single and double rainbows are relatively easily observed.

On the Fall 2010 Exam in NATS 1780, I included the question:
If it were possible, how would a tertiary (i.e., third) rainbow be produced?
A number of students correctly answered that three reflections internal to the raindrop would be required to produce such a phenomenon.

Although I had intended this to be a question of theoretical merit only, I was delighted to learn that both triple and quadruple rainbows have been observed – in other words, they are no longer just a theoretical possibility. (Quadruple rainbows would require four internal reflections.)

Alas, I’ve only ever been able to capture single and double rainbows … My personal quest for the more elusive triple and quadruple rainbows continues …

Multi-Touch Computational Steering

About 1:35 into his TED talk, Jeff Han impressively demonstrates a lava-lamp application on a multi-touch user interface.

Having spent considerable time in the past pondering the fluid dynamics (e.g., convection) of the Earth’s atmosphere and deep interior (i.e., mantle and core), Han’s demonstration immediately triggered a scientific use case: Is it possible to computationally steer scientific simulations via multi-touch user interfaces?

A quick search via Google returns almost 20,000 hits … In other words, I’m likely not the first to make this connection 😦

In my copious spare time, I plan to investigate further …

Also of note is how this connection was made: A friend sent me a link to an article on Apple’s anticipated tablet product. Since so much of the anticipation of the Apple offering relates to the user interface, it’s not surprising that reference was made to Jeff Han’s TED talk. Cool.

If you have any thoughts to share on multi-touch computational steering, please feel free to chime in.

One more thought … I would imagine that the gaming industry would be quite interested in such a capability – if it isn’t already!

Net@EDU 2008: Key Takeaways

Earlier this week, I participated in the Net@EDU Annual Meeting 2008: The Next 10 Years. For me, the key takeaways are:

  • The Internet can be improved. IP, its transport protocols (RTP, SIP, TCP and UDP), and especially HTTP, are stifling innovation at the edges – everything (device-oriented) on IP and everything (application-oriented) on the Web. There are a number of initiatives that seek to improve the situation. One of these, with tangible outcomes, is the Stanford Clean Slate Internet Design Program.
  • Researchers and IT organizations need to be reunited. In the 1970s and 1980s, these two groups worked closely together and delivered a number of significant outcomes. Since the 1990s, the groups have remained separate and distinct. This separation has not benefited either group. As the manager of a team focused on the operation of a campus network who still manages to conduct a modest amount of research, this takeaway resonates particularly strongly with me.
  • DNSSEC is worth investigating now. DNS is a mission-critical service. It is often, however, an orphaned service in many IT organizations. DNSSEC comprises four standards that extend the original concept in security-savvy ways – e.g., they will harden your DNS infrastructure against DNS-targeted attacks. Although production implementation remains in the future, the time to get involved is now.
  • The US is lagging behind in broadband. An EDUCAUSE blueprint details the current situation, and offers a prescription for rectifying it. As a Canadian, I note that Canada’s progress in this area is exceptional, even though Canada is regarded as a much more rural nation than the US. The key to the Canadian success, and a key component of the blueprint’s prescription, is a funding model that shares costs evenly between two levels of government (federal and provincial) as well as the network builder/owner.
  • Provisioning communications infrastructures for emergency situations is a sobering task. Virginia Tech experienced 100-3000% increases in the demands on their communications infrastructure as a consequence of their April 16, 2007 event. Such stress factors are exceedingly difficult to estimate and account for. In some cases, responding in real time allowed for adequate provisioning through a tremendous amount of collaboration. Mass notification remains a challenge.
  • Today’s and tomorrow’s students are different from yesterday’s. Although this may sound obvious, the details are interesting. Ultimately, this difference derives from the fact that today’s and tomorrow’s students have more intimately integrated technology into their lives from a very young age.
  • Cyberinfrastructure remains a focus. EDUCAUSE has a Campus Cyberinfrastructure Working Group. Some of their deliverables are soon to include a CI digest, plus contributions from their Framing and Information Management Focus Groups. In addition to the working-group session, Don Middleton of NCAR discussed the role of CI in the atmospheric sciences. I was particularly pleased that Middleton made a point of showcasing semantic aspects of virtual observatories such as the Virtual Solar-Terrestrial Observatory (VSTO).
  • The Tempe Mission Palms Hotel is an outstanding venue for a conference. Net@EDU has themed its annual meetings around this hotel, Tempe, Arizona, and the month of February. The venue delivers on this strategic choice in spades. From individual rooms, to conference food and logistics, to the mini gym and pool, The Tempe Mission Palms Hotel delivers.
