Pencasting During Lectures in Large Venues

In a recent post on pencasting as a way of teaching/learning weather and climate, I stated:

Monday (October 1, 2012), I intend to use a pencast during my lecture – to introduce aspects of the stability of Earth’s atmosphere. I’ll try to share here how it went. For this intended use of the pencast, I will use a landscape mode for presentation – as I expect that’ll work well in the large lecture hall I teach in. I am, however, a little concerned that the lines I’ll be drawing will be a little too thin/faint for the students at the back of the lecture theatre to see …

I followed through as advertised (above) earlier today.

[Image: the 500-seat Price Family Cinema at York University]

My preliminary findings are as follows:

  • The visual aspects of the pencast are quite acceptable – This is true even in large lecture halls such as the 500-seat Price Family Cinema at York University (pictured above) in Toronto, Canada, where I am currently teaching. I used landscape mode for today’s pencast, and zoomed in a little. A slightly thicker pen option would be wonderful for such situations … as would different pen colours (the default is green).
  • The audio quality of the pencasts is very good to excellent – Although my Livescribe pen came with a headset/microphone, I don’t use it. I simply use the built-in microphone on the pen, and speak normally when I am developing pencasts. Of course, the audio capabilities of the lecture hall I teach in are most excellent for playback!
  • One-to-many live streaming of pencasts works well – I streamed live directly from myLivescribe today. I believe the application infrastructure is based largely on Adobe Flash and various Web services delivered by WebObjects. Regardless of the technical underpinnings, live streaming worked well. Of course, I could’ve developed a completely self-contained PDF file, downloaded this, and run the pencast locally using Adobe Reader.
  • Personal pencasting works well – I noticed that a number of students were streaming the pencast live for themselves during the lecture. In so doing, they could control interaction with the pencast.

Anecdotally, a few students mentioned that they appreciated the pencast during the break period – my class meets once per week for a three-hour session.

Although I’ve yet to hear this feedback directly from the students, I believe I need to:

  • Decrease the duration of pencasts – Today’s runs about 10 minutes
  • Employ a less-is-more approach/strategy – My pencasts are fairly involved when done …
  • Experiment with the right balance of speaking to penning (is that even a word!?) – Probably a less-is-more approach/strategy would work well here for both the penned and spoken word …

Finally, today’s pencast on the basics of atmospheric stability:

  • Previous approach – Project an illustration taken directly from the course’s text. This is a professionally produced, visually appealing, detailed, end-result, static diagram that I embedded in my presentation software (I use Google Docs for a number of reasons). Using a laser pointer, I would systematically deconstruct this diagram – hoping that the students were engaged enough to actually follow me. Of course, in the captured versions of my lectures, the students don’t actually see where I’m directing the laser pointer. The students have access to the course text and my lecture slides. I have no idea if/how they attempt to ingest and learn from this approach.
  • Pencasting – As discussed elsewhere, the starting point is a blank slate. Using the pencasting technology, I sketch my own rendition of the illustration from the text. As I build up the details, I explain the concept of stability analyses. Because the sketch appears as I speak, the students have the potential to follow me quite closely – and if they miss anything, they can review the pencast after class at their own pace. The end result of a pencast is a sketch that doesn’t hold a candle to the professionally produced illustration provided in the text and my lecture notes. However, to evaluate the pencast as merely a final product, I believe, misses the point completely. Why? I believe the pencast is a far superior way to teach and to learn in situations such as this one. Why? I believe the pencast allows the teacher to focus on communication – communication that the learner can also choose to be highly receptive to, and engaged by.

I still regard myself as very much a neophyte in this arena. However, as the above final paragraphs indicate, pencasting is a disruptive innovation whose value in teaching/learning merits further investigation.

Teaching/Learning Weather and Climate via Pencasting

I first heard about pencasting a few years ago, and thought it sounded interesting … and then, this past Summer, I did a little more research and decided to purchase a Livescribe 8 GB Echo(TM) Pro Pack. Over the Summer, I took notes with the pen from time to time and found it to be somewhat useful/interesting.

Just this week, however, I decided it was time to use the pen for its originally intended purpose: making pencasts for the weather and climate course I’m currently teaching at Toronto’s York University. Before I share some sample pencasts, please allow me to share my findings based on less than a week’s worth of ‘experience’:

  • Decent-quality pencasts can be produced with minimal effort – I figured out the basics (e.g., how to record my voice) in a few minutes, and started on my first pencast. Transferring the pencast from the pen to the desktop software to the Web (where it can be shared with my students) also requires minimal effort. “Decent quality” here refers to both the visual and audio elements. The fact that this is both a very natural (writing with a pen while speaking!) and speedy (efficient/effective) undertaking means that I am predisposed towards actually using the technology whenever it makes sense – more on that below. Net-net: This solution is teacher-friendly.
  • Pencasts complement other instructional media – This is my current perspective … Pencasts complement the textbook readings I assign, the lecture slides plus video/audio captures I provide, the Web sites we all share, the Moodle discussion forums we engage in, the Tweets I issue, etc. In the spirit of blended learning, it is my hope that pencasts, in concert with these other instructional media, will allow my TAs and me to ‘reach’ most of the students in the course.
  • Pencasts allow the teacher to address both content and skills-oriented objectives – Up to this point, my pencasts have started from a blank page. This forces me to be focused, and to develop systematically towards some desired content-oriented (e.g., conceptually introducing the phase diagram for H2O) and/or skills-oriented (e.g., how to calculate the slope of a line on a graph) outcome. Because students can follow along, they have the opportunity to be fully engaged as the pencast progresses. Of course, what this also means is that this technology can be effective not only in the first-year university course I’m currently teaching, but also at the academic levels that precede (e.g., grade school, high school, etc.) and follow (senior undergraduate and graduate) this level.
  • Pencasts are learner-centric – In addition to being teacher-friendly, pencasts are learner-centric. Although a student could passively watch and listen to a pencast as it plays out in a linear, sequential fashion, the technology almost begs you to interact with it. As noted previously, this means a student can easily replay some aspect of the pencast that they missed. Even more interestingly, however, students can interact with pencasts in a random-access mode – a mode that would almost certainly be useful when they are attempting to apply the content/skills conveyed through the pencast to a tutorial or assignment they are working on, or a quiz or exam they are actively studying for. It is important to note that both the visual and audio elements of the pencast can be manipulated with impressive responsiveness to random-access input from the student.
  • I’m striving for authentic, not perfect pencasts – With a little more practice and some planning/scripting, I’d be willing to bet that I could produce an extremely polished pencast. Based on past experience teaching today’s first-year university students, I’m fairly convinced that this is something they couldn’t care less about. Let’s face it, my in-person lectures aren’t perfectly polished, and neither are my pencasts. Because I can easily go back to existing pencasts and add to them, I don’t need to fret too much about being perfect the first time. Too much time spent fussing here would diminish the natural and speedy aspects of the technology.

Findings aside, on to samples:

  • Calculating the lapse rate for Earth’s troposphere – This is largely a skills-oriented example, and it was my first pencast; a short worked sketch of the underlying arithmetic follows this list. I returned twice to the original pencast to make changes – once to correct a spelling mistake, and the second time to add in a bracket (“Run”) that I forgot. I communicated these changes to the students in the course via an updated link shared through a Moodle forum dedicated to pencasts. If you were to experience the updates, you’d almost be unaware of the lapse of time between the original pencast and the updates, as all of this is presented seamlessly as a single pencast to the students.
  • Introducing the pressure-temperature phase diagram for H2O – This is largely a content-oriented example. I got a little carried away in this one, and ended up packing in a little too much – the pencast is fairly long, and by the time I’m finished, the visual element is … a tad on the busy side. Experience gained.
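For readers who want to see the arithmetic behind that first sample, here is a minimal worked sketch of the slope calculation, using rounded standard-atmosphere values (roughly 15 °C at the surface and about −56.5 °C at an 11 km tropopause) rather than the exact figures from the pencast:

    \Gamma = -\frac{\Delta T}{\Delta z}
           = -\frac{(-56.5\,^{\circ}\mathrm{C}) - (15\,^{\circ}\mathrm{C})}{11\,\mathrm{km} - 0\,\mathrm{km}}
           \approx 6.5\,^{\circ}\mathrm{C}\ \mathrm{per\ km}

In the pencast itself, the “Rise” and “Run” are simply read off the sketched graph before carrying out the division.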

Anecdotally, initial reaction from the students has been positive. Time will tell.

Next steps:

  • Monday (October 1, 2012), I intend to use a pencast during my lecture – to introduce aspects of the stability of Earth’s atmosphere. I’ll try to share here how it went. For this intended use of the pencast, I will use a landscape mode for presentation – as I expect that’ll work well in the large lecture hall I teach in. I am, however, a little concerned that the lines I’ll be drawing will be a little too thin/faint for the students at the back of the lecture theatre to see …
  • I have two sections of the NATS 1780 Weather and Climate course to teach this year. One section is taught the traditional way – almost 350 students in a large lecture theatre, 25-student tutorial groups, supported by Moodle, etc. In striking contrast to the approach taken in the meatspace section, the second section takes place almost entirely online via Moodle. Although I have yet to support this hypothesis with any data, it is my belief that these pencasts are an excellent way to reach out to the students in the Internet-only section of the course. More on this over the fullness of time (i.e., over the current academic session).

Feel free to comment on this post or share your own experiences with pencasts.

Synthetic Life and Evolution of Earth’s Second Atmosphere

I have the pleasure of teaching the science of weather and climate to non-scientists again this Fall/Winter session at Toronto’s York University. In the Fall 2011 Term, time was spent discussing the origin and evolution of Earth’s atmosphere. What follows is a post I just shared with the class via Moodle (our LMS):
Photosynthesizing anaerobic lifeforms in Earth’s oceans were likely responsible for systematically enriching Earth’s atmosphere with respect to O2. Through chemical reactions in Earth’s atmosphere, O3 and the O3 layer were systematically derived from this same source of O2. The O3 layer’s ability to minimize the impact of harmful UV radiation, in tandem with the ascent of [O2] to current values of about 21% by volume, were and remain crucial to life as we experience it today.

In tracing the evolution of Earth’s second atmosphere from a composition based on volcanic outgassing to its present state, the role of life was absolutely critical.
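As an aside for the chemistry-inclined, the O2-to-O3 step mentioned in that post can be sketched in simplified, Chapman-style form; the notation below is my own shorthand rather than the course text’s:

    \mathrm{O_2} + h\nu\ (\mathrm{UV}) \longrightarrow \mathrm{O} + \mathrm{O}
    \mathrm{O} + \mathrm{O_2} + \mathrm{M} \longrightarrow \mathrm{O_3} + \mathrm{M}

Here M is any third molecule (typically N2 or O2) that carries away excess energy so the newly formed O3 does not immediately fall apart.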

On my drive home tonight after today’s lecture, I happened upon a broadcast regarding synthetic life on CBC Radio’s Ideas. Based upon annotated excerpts from a Craig Venter lecture, this broadcast is well worth the listen in and of itself. And although I’m no life scientist, I can’t help but predict that Venter’s work will ultimately lead to refinements, if not a complete rewrite, of life’s role in the evolution of Earth’s second atmosphere.
If you have any thoughts on this prediction, please feel free to share them here via a comment.

Early Win Required for Partner-Friendly, Post-Acquisition Platform Computing

Further to the LinkedIn discussion on the relatively recent acquisition of Platform by IBM, I just posted:

Platform CEO and Founder Songnian Zhou has this to say regarding the kernel of this discussion:

“IBM expects Platform to operate as a coherent business unit within its Systems and Technology Group. We got some promises from folks at IBM. We will accelerate our investments and growth. We will deliver on our product roadmaps. We will continue to provide our industry-best support and services. We will work even harder to add value to our partners, including IBM’s competitors. We want to make new friends while keeping the old, for one is silver while the other is gold. We might even get to keep our brand name. After all, distributed computing needs a platform, and there is only one Platform Computing. We are an optimistic bunch. We want to deliver to you the best of both worlds – you know what I mean. Give us a chance to show you what we can do for you tomorrow. Our customers and partners have journeyed with Platform all these years and have not regretted it. We are grateful to them eternally.”

Unsurprisingly upbeat, Zhou, Platform, and IBM really do need customers and partners to give them a chance to prove themselves under the new business arrangement. As noted in my previous comment in this discussion, this’ll require some seriously skillful stickhandling around challenging issues such as IP (Intellectual Property) – a challenge that is particularly exacerbated by the demands of the tightly coupled integrations required to deliver tangible value in the HPC context.

How might IBM-acquired Platform best demonstrate that it’s true to its collective word:

“Give us a chance to show you what we can do for you tomorrow.”

Certainly one way is to strike an early win with a partner – a win demonstrating that they (Zhou, Platform and IBM) are true to their collective word. Aspects of this demonstration should likely include:

  • IP handling disclosures. Post-acquisition Platform and the partner should be as forthcoming as possible with respect to IP (Intellectual Property) handling – i.e., they should collectively communicate how business and technical IP challenges were handled in practice.
  • Customer validation. Self-evidently, such a demonstration has negligible value without validation by a customer willing to state publicly why they have adopted the corresponding solution.
  • HPC depth. This demonstration has to comprise a whole lot more than merely porting a Platform product to a partner’s platform traditionally viewed as competitive to IBM. As stated previously, herein lies the conundrum: “To deliver a value-rich solution in the HPC context, Platform has to work (extremely) closely with the ‘system vendor’. In many cases, this closeness requires that Intellectual Property (IP) of a technical and/or business nature be communicated …”

Thus, as attention shifts to post-acquisition Platform over the fullness of time, trust becomes the watchword for continued success – particularly in HPC.

For without trust, there will be no opportunity for demonstrations such as the early win outlined here.

How else might IBM-acquired Platform demonstrate that it’s business-better-than-usual?

Feel free to add your $0.02.

IBM-Acquired Platform: Plan for Sustained, Partner-Friendly HPC Innovation Required

Over on LinkedIn, there’s an interesting discussion taking place in the “High Performance & Super Computing” group on the recently announced acquisition of Markham-based Platform Computing by IBM. My comment (below) was stimulated by concerns regarding the implications of this acquisition for IBM’s traditional competitors (i.e., other system vendors such as Cray, Dell, HP, etc.):

It could be argued:

“IBM groks vendor-neutral software and services (e.g., IBM Global Services), and therefore coopetition.”

At face value then, it’ll be business-as-usual for IBM-acquired Platform – and therefore its pre-acquisition partners and customers.

While business-as-usual plausibly applies to porting Platform products to offerings from IBM’s traditional competitors, I believe the sensitivity to the new business relationship (Platform as an IBM business unit) escalates rapidly for any solution that has value in HPC.

Why?

To deliver a value-rich solution in the HPC context, Platform has to work (extremely) closely with the ‘system vendor’. In many cases, this closeness requires that Intellectual Property (IP) of a technical and/or business nature be communicated – often well before solutions are introduced to the marketplace and made available for purchase. Thus Platform’s new status as an IBM entity has the potential to seriously complicate matters regarding risk, trust, etc., relating to the exchange of IP.

Although it’s been stated elsewhere that IBM will allow Platform measures of post-acquisition independence, I doubt that this’ll provide sufficient comfort for matters relating to IP. While NDAs specific to the new (and independent) Platform business unit within IBM may offer some measure of additional comfort, I believe that technically oriented approaches offer the greatest promise for mitigating concerns relating to risk, trust, etc., in the exchange of IP.

In principle, one possibility is the adoption of open standards by all stakeholders. Such standards hold the promise of allowing for the integration between products via documented interfaces and protocols, while allowing (proprietary) implementation specifics to remain opaque. Although this may sound appealing, the availability of such standards remains elusive – despite various, well-intended efforts (by HPC, Grid, Cloud, etc., communities).

While Platform’s traditional competitors predictably and understandably gorge themselves on FUD, it obviously behooves both Platform and IBM to expend some effort allaying the concerns of their customers and partner ecosystem.

I’d be interested to hear of others’ suggestions as to how this new business relationship might allow for sustained innovation in the HPC context from IBM-acquired Platform.

Disclaimer: Although I do not have a vested financial interest in this acquisition, I did work for Platform from 1998-2005.

To reiterate here then:

How can this new business relationship allow for sustained, partner-friendly innovation in the HPC context from IBM-acquired Platform?

Please feel free to share your thoughts on this via comments to this post.

Confronting the Fear of Public Speaking via Virtual Environments

Confession: In the past, I’ve been extremely quick to dismiss the value of Second Life in the context of teaching and learning.

Even worse, my dismissal was not fact-based … and, if truth be told, I’ve gone out of my way to avoid opportunities to ‘gather the facts’ by attending presentations at conferences, conducting my own research online, speaking with my colleagues, etc.

So I, dear reader, am as surprised as any of you to have had an egg-on-my-face epiphany this morning …

Please allow me to elaborate:

It was at some point during this morning’s brainstorming session that the egg hit me squarely in the face:

Why not use Nortel web.alive to prepare graduate students for presenting their research?

Often feared more than death and taxes, public speaking is an essential aspect of academic research – regardless of the discipline.

Enter Nortel web.alive, with its virtual environment of a large lecture hall – complete with a podium, a projection screen for sharing slides and, most importantly, an audience!

As a former graduate student, I could easily ‘see’ myself in this environment with increasingly realistic audiences comprised of friends, family and/or pets, fellow graduate students, my research supervisor, my supervisory committee, etc. Because Nortel web.alive only requires a Web browser, my audience isn’t geographically constrained. This geographical freedom is important as it allows for participation – e.g., between graduate students at York in Toronto and their supervisor who just happens to be on sabbatical in the UK. (Trust me, this happens!)

As the manager of Network Operations at York, I’m always keen to encourage novel use of our campus network. The public-speaking use case I’ve described here has the potential to make innovative use of our campus network, regional network (GTAnet), provincial network (ORION), and even national network (CANARIE) that would ultimately allow for global connectivity.

While I busy myself scraping the egg off my face, please chime in with your feedback. Does this sound useful? Are you aware of other efforts to use virtual environments to confront the fear of public speaking? Are there related applications that come to mind for you? (As someone who’s taught classes of about 300 students in large lecture halls, a little bit of a priori experimentation in a virtual environment would’ve been greatly appreciated!)

Update (November 13, 2009): I just Google’d the title of this article and came up with a few, relevant hits; further research is required.

On Knowledge-Based Representations for Actionable Data …

I bumped into a professional acquaintance last week. After describing briefly a presentation I was about to give, he offered to broker introductions to others who might have an interest in the work I’ve been doing. To initiate the introductions, I crafted a brief description of what I’ve been up to for the past 5 years in this area. I’ve also decided to share it here as follows: 

As always, [name deleted], I enjoyed our conversation at the recent AGU meeting in Toronto. Below, I’ve tried to provide some context for the work I’ve been doing in the area of knowledge representations over the past few years. I’m deeply interested in any introductions you might be able to broker with others at York who might have an interest in applications of the same.

Since 2004, I’ve been interested in expressive representations of data. My investigations started with a representation of geophysical data in the eXtensible Markup Language (XML). Although this was successful, use of the approach revealed that metadata (data about data) had been an oversight. To address this oversight, a subsequent effort introduced a relationship-centric representation via the Resource Description Framework (RDF). RDF, by the way, forms the underpinnings of the next-generation Web – variously known as the Semantic Web, Web 3.0, etc. In addition to taking care of issues around metadata, use of RDF paved the way for increasingly expressive representations of the same geophysical data. For example, to represent features in and of the geophysical data, an RDF-based scheme for annotation was introduced using the XML Pointer Language (XPointer). Somewhere around this point in my research, I placed all of this into a framework.

A data-centric framework for knowledge representation.

 In addition to applying my Semantic Framework to use cases in Internet Protocol (IP) networking, I’ve continued to tease out increasingly expressive representations of data. Most recently, these representations have been articulated in RDFS – i.e., RDF Schema. And although I have not reached the final objective of an ontological representation in the Web Ontology Language (OWL), I am indeed progressing in this direction. (Whereas schemas capture the vocabulary of an application domain in geophysics or IT, for example, ontologies allow for knowledge-centric conceptualizations of the same.)  

From niche areas of geophysics to IP networking, the Semantic Framework is broadly applicable. As a workflow for systematically enhancing the expressivity of data, the Framework is based on open standards emerging largely from the World Wide Web Consortium (W3C). Because there is significant interest in this next-generation Web from numerous parties and angles, implementation platforms already allow for increasingly expressive representations of data. In making data actionable, the ultimate value of the Semantic Framework lies in providing a means for integrating data from seemingly incongruous disciplines. For example, such representations can yield genuinely new results – derived by querying the representation through a ‘semantified’ version of the Structured Query Language (SQL) known as SPARQL.
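To make the SPARQL reference a little more concrete, here is a minimal, purely illustrative sketch in Python using the rdflib library. The vocabulary (the ex: namespace, MagnetometerStation, latitude) is hypothetical – it is not the actual schema employed in the Semantic Framework – but the general pattern (express the data as RDF triples, then query them) is the one described above:

    # Illustrative only: a toy RDF graph queried with SPARQL via rdflib.
    # The ex: vocabulary below is hypothetical, not the Framework's actual schema.
    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, XSD

    EX = Namespace("http://example.org/geo#")
    g = Graph()
    g.bind("ex", EX)

    # Two hypothetical magnetometer stations with approximate latitudes
    for code, lat in [("OTT", 45.4), ("YKC", 62.5)]:
        station = EX[code]
        g.add((station, RDF.type, EX.MagnetometerStation))
        g.add((station, EX.latitude, Literal(lat, datatype=XSD.double)))

    # SPARQL: which stations lie north of 50 degrees latitude?
    results = g.query("""
        PREFIX ex: <http://example.org/geo#>
        SELECT ?station ?lat
        WHERE {
            ?station a ex:MagnetometerStation ;
                     ex:latitude ?lat .
            FILTER (?lat > 50.0)
        }
    """)

    for row in results:
        print(row.station, row.lat)  # only the northern station matches

Trivial as it is, the sketch hints at why a relationship-centric representation is so useful: once the data are expressed as triples, new questions can be posed without reorganizing the underlying data.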

I’ve spoken formally and informally about this research to audiences in the sciences, IT, and elsewhere. With York co-authors spanning academic and non-academic staff, I’ve also published four refereed journal papers on aspects of the Framework, and have an invited book chapter currently under review – interestingly, this chapter has been contributed to a book focusing on data management in the Semantic Web. Of course, I’d be pleased to share any of my publications and discuss aspects of this work with those finding it of interest.

With thanks in advance for any connections you’re able to facilitate, Ian. 

If anything comes of this, I’m sure I’ll write about it here – eventually!

In the meantime, feedback is welcome.

Blended Learning Panel

York University’s Institute for Research on Learning Technologies is sponsoring a panel discussion on blended learning:

“A recent workplace survey reported by Brandon Hall Publishing (2008) indicates that employing a mix of web-technologies with face-to-face learning is more effective than either e-learning or face-to-face instructional approaches alone. To explore the use and potential of “blended learning” further, please join us for a panel discussion featuring experts from various fields …”

This event has been re-scheduled for April 2, 2009 at 12:15 pm in TEL 1009 at York’s Keele Campus. I anticipate a lively and interesting discussion!

(Please check the IRLT Web site for the latest updates on the event.)

ORION/CANARIE National Summit

Just in case you haven’t heard:

… join us for an exciting national summit on innovation and technology, hosted by ORION and CANARIE, at the Metro Toronto Convention Centre, Nov. 3 and 4, 2008.

“Powering Innovation – a National Summit” brings over 55 keynotes, speakers and panelists from across Canada and the US, including best-selling author of Innovation Nation, Dr. John Kao; President/CEO of Internet2 Dr. Doug Van Houweling; Chancellor of the University of California at Berkeley Dr. Robert J. Birgeneau; advanced visualization guru Dr. Chaomei Chen of Philadelphia’s Drexel University; and many more. Sara Diamond, President of the Ontario College of Art & Design, chairs “A Boom with View”, a session on visualization technologies. Dr. Gail Anderson presents on forensic science research. Other speakers include the host of CBC Radio’s Spark, Nora Young; Delvinia Interactive’s Adam Froman; and the President and CEO of Zerofootprint, Ron Dembo.

This is an excellent opportunity to meet and network with up to 250 researchers, scientists, educators, and technologists from across Ontario and Canada and the international community. Attend sessions on the very latest on e-science; network-enabled platforms, cloud computing, the greening of IT; applications in the “cloud”; innovative visualization technologies; teaching and learning in a web 2.0 universe and more. Don’t miss exhibitors and showcases from holographic 3D imaging, to IP-based television platforms, to advanced networking.

For more information, visit http://www.orioncanariesummit.ca.