Pencasting During Lectures in Large Venues

In a recent post on pencasting as a way of teaching/learning weather and climate, I stated:

Monday (October 1, 2012), I intend to use a pencast during my lecture – to introduce aspects of the stability of Earth’s atmosphere. I’ll try to share here how it went. For this intended use of the pencast, I will use a landscape mode for presentation – as I expect that’ll work well in the large lecture hall I teach in. I am, however, a little concerned that the lines I’ll be drawing will be a little too thin/faint for the students at the back of the lecture theatre to see …

I followed through as advertised (above) earlier today.

[Image: the Price Family Cinema at York University, Toronto]

My preliminary findings are as follows:

  • The visual aspects of the pencast are quite acceptable – This is true even in large lecture halls such as the 500-seat Price Family Cinema at York University (pictured above) in Toronto, Canada where I am currently teaching. I used landscape mode for today’s pencast, and zoomed it in a little. A slightly thicker pen option would be wonderful for such situations … as would different pen colours (the default is green).
  • The audio quality of the pencasts is very good to excellent – Although my Livescribe pen came with a headset/microphone, I don’t use it. I simply use the built-in microphone on the pen, and speak normally when I am developing pencasts. Of course, the audio capabilities of the lecture hall I teach in are most excellent for playback!
  • One-to-many live streaming of pencasts works well – I streamed live directly from myLivescribe today. I believe the application infrastructure is based largely on Adobe Flash and various Web services delivered by WebObjects. Regardless of the technical underpinnings, live streaming worked well. Of course, I could’ve developed a completely self-contained PDF file, downloaded this, and run the pencast locally using Adobe Reader.
  • Personal pencasting works well – I noticed that a number of students were streaming the pencast live for themselves during the lecture. In so doing, they could control interaction with the pencast.

Anecdotally, a few students mentioned that they appreciated the pencast during the break period – my class meets once per week for a three-hour session.

Although I’ve yet to hear this feedback directly from the students, I believe I need to:

  • Decrease the duration of pencasts – Today’s lasted about 10 minutes
  • Employ a less-is-more approach/strategy – My pencasts are fairly involved when done …
  • Experiment with the right balance of speaking to penning (is that even a word!?) – Probably a less-is-more approach/strategy would work well here for both the penned and spoken word …

Finally, today’s pencast on the basics of atmospheric stability:

  • Previous approach – Project an illustration taken directly from the course’s text. This is a professionally produced, visually appealing, detailed, end-result, static diagram that I embedded in my presentation software (I use Google Docs for a number of reasons). Using a laser pointer, my pedagogy called for a systematic deconstruction of this diagram – hoping that the students would be engaged enough to actually follow me. Of course, in the captured versions of my lectures, the students don’t actually see where I’m directing the laser pointer. The students have access to the course text and my lecture slides. I have no idea if/how they attempt to ingest and learn from this approach.
  • Pencasting – As discussed elsewhere, the starting point is a blank slate. Using the pencasting technology, I sketch my own rendition of the illustration from the text. As I build up the details, I explain the concept of stability analyses. Because the sketch appears as I speak, the students have the potential to follow me quite closely – and if they miss anything, they can review the pencast after class at their own pace. The end result of a pencast is a sketch that doesn’t hold a candle to the professionally produced illustration provided in the text and my lecture notes. However, to evaluate the pencast as merely a final product, I believe, misses the point completely. Why? I believe the pencast is a far superior way to teach and to learn in situations such as this one. Why? I believe the pencast allows the teacher to focus on communication – communication that the learner can also choose to be highly receptive to, and engaged by.

I still regard myself as very much a neophyte in this arena. However, as the above final paragraphs indicate, pencasting is a disruptive innovation whose value in teaching/learning merits further investigation.

Aakash: A Disruptive Innovation in the Truest Sense

Much has been, and will be, written about the Aakash tablet.

[With apologies for the situational monsoonal imagery …] As I find myself awash in Aakash, I am particularly taken by:

  • The order-of-magnitude reduction in price point. With a stated cost of about $50, even marked-up prices remain close to an order of magnitude more affordable than the incumbent offerings (e.g., the iPad, Android-based tablets, etc.). Even Amazon’s Kindle Fire is 2-3 times more expensive.
  • The adoption of Android as the innovation platform. I take this as yet another data point (YADP) in firmly establishing Android as the leading future-proofed platform for innovation in the mobile-computing space. As Aakash solidly demonstrates, it’s about the all-inclusive collaboration that can occur when organizational boundaries are made redundant through use of an open platform for innovation. These dynamics just aren’t the same as those that would be achieved by embracing proprietary platforms (e.g., Apple’s iOS, RIM’s QNX-based OS, etc.).
  • The Indian origin. It took MIT Being Digital, in the meatspace personage of Nicholas Negroponte, to hatch the One Laptop Per Child initiative. In the case of Aakash, this is grass-roots innovation with Grameen Bank-like possibilities.
While some get distracted comparing/contrasting tech specs, the significant impact of Aakash is that it is a disruptive innovation in the truest sense:
“An innovation that is disruptive allows a whole new population of consumers access to a product or service that was historically only accessible to consumers with a lot of money or a lot of skill.  Characteristics of disruptive businesses, at least in their initial stages, can include:  lower gross margins, smaller target markets, and simpler products and services that may not appear as attractive as existing solutions when compared against traditional performance metrics.”
I am certainly looking forward to seeing this evolve!

Disclaimers:
  • Like Aakash, I am of Indian origin. My Indian origin, however, is somewhat diluted by some English origin – making me an Anglo-Indian. Regardless, my own origin may play some role in my gushing exuberance for Aakash – and hence the need for this disclaimer.
  • I am the owner of a Motorola Xoom, but not an iPad. This may mean I am somewhat predisposed towards the Android platform.
Feel free to chime in with your thoughts on Aakash by commenting on this post.

Targeting Public Speaking Skills via Virtual Environments

Recently I shared an a-ha! moment on the use of virtual environments for confronting the fear of public speaking.

The more I think about it, the more I’m inclined to claim that the real value of such technology is in targeted skills development.

Once again, I’ll use myself as an example here to make my point.

If I think back to my earliest attempts at public speaking as a graduate student, I’d claim that I did a reasonable job of delivering my presentation. And given that the content of my presentation was likely vetted with my research peers (fellow graduate students) and supervisor ahead of time, this left me with a targeted opportunity for improvement: The Q&A session.

Countless times I can recall having a brilliant answer to a question long after my presentation was finished – e.g., on my way home from the event. Not very useful … and exceedingly frustrating.

I would also assert that this lag, between question and appropriate answer, had a whole lot less to do with my expertise in a particular discipline, and a whole lot more to do with my degree of nervousness – how else can I explain the ability to fashion perfect answers on the way home!

Over time, I like to think that I’ve improved my ability to deliver better-quality answers in real time. How have I improved? Experience. I would credit my experience teaching science to non-scientists at York, as well as my public-sector experience as a vendor representative at industry events, as particularly edifying in this regard.

Rather than submit to such baptisms of fire, and because hindsight is 20/20, I would’ve definitely appreciated the opportunity to develop my Q&A skills in virtual environments such as Nortel web.alive. Why? Such environments can easily facilitate the focused effort I required to target the development of my Q&A skills. And, of course, as my skills improve, so can the challenges brought to bear via the virtual environment.

All speculation at this point … Reasonable speculation that needs to be validated …

If you were to embrace such a virtual environment for the development of your public-speaking skills, which skills would you target? And how might you make use of the virtual environment to do so?

On Knowledge-Based Representations for Actionable Data …

I bumped into a professional acquaintance last week. After describing briefly a presentation I was about to give, he offered to broker introductions to others who might have an interest in the work I’ve been doing. To initiate the introductions, I crafted a brief description of what I’ve been up to for the past 5 years in this area. I’ve also decided to share it here as follows: 

As always, [name deleted], I enjoyed our conversation at the recent AGU meeting in Toronto. Below, I’ve tried to provide some context for the work I’ve been doing in the area of knowledge representations over the past few years. I’m deeply interested in any introductions you might be able to broker with others at York who might have an interest in applications of the same.

Since 2004, I’ve been interested in expressive representations of data. My investigations started with a representation of geophysical data in the eXtensible Markup Language (XML). Although this was successful, use of the approach exposed an oversight: the importance of metadata (data about data). To address this oversight, a subsequent effort introduced a relationship-centric representation via the Resource Description Framework (RDF). RDF, by the way, forms the underpinnings of the next-generation Web – variously known as the Semantic Web, Web 3.0, etc. In addition to taking care of issues around metadata, use of RDF paved the way for increasingly expressive representations of the same geophysical data. For example, to represent features in and of the geophysical data, an RDF-based scheme for annotation was introduced using the XML Pointer Language (XPointer). Somewhere around this point in my research, I placed all of this into a framework.

A data-centric framework for knowledge representation.

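To make this a little more concrete, here is a minimal sketch in Python using rdflib. The namespace, vocabulary, and values below are purely hypothetical stand-ins rather than the actual schema from my research; the point is simply how relationships and metadata coexist with the data in a single graph.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, XSD

    # Hypothetical vocabulary and identifiers – illustrative only
    GEO = Namespace("http://example.org/geophysics#")

    g = Graph()
    g.bind("geo", GEO)

    obs = GEO["observation/mag-2004-001"]      # one magnetometer reading
    g.add((obs, RDF.type, GEO.Observation))
    g.add((obs, GEO.value, Literal("58231.4", datatype=XSD.decimal)))
    g.add((obs, GEO.observedAt, GEO["station/YKC"]))

    # Metadata (data about data) lives in the same graph as the data
    g.add((obs, GEO.instrument, Literal("fluxgate magnetometer")))
    g.add((obs, GEO.timestamp, Literal("2004-06-01T00:00:00Z", datatype=XSD.dateTime)))

    print(g.serialize(format="turtle"))

Serializing the graph as Turtle makes every statement an explicit subject-predicate-object relationship, which is the relationship-centric quality described above.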

 In addition to applying my Semantic Framework to use cases in Internet Protocol (IP) networking, I’ve continued to tease out increasingly expressive representations of data. Most recently, these representations have been articulated in RDFS – i.e., RDF Schema. And although I have not reached the final objective of an ontological representation in the Web Ontology Language (OWL), I am indeed progressing in this direction. (Whereas schemas capture the vocabulary of an application domain in geophysics or IT, for example, ontologies allow for knowledge-centric conceptualizations of the same.)  
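As a hedged illustration of that schema-versus-ontology distinction, a minimal RDFS vocabulary for the hypothetical example above might be declared as follows (again, illustrative names only, not my published schema):

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, RDFS

    GEO = Namespace("http://example.org/geophysics#")  # hypothetical namespace

    schema = Graph()
    schema.bind("geo", GEO)

    # Classes in the vocabulary
    schema.add((GEO.Observation, RDF.type, RDFS.Class))
    schema.add((GEO.Station, RDF.type, RDFS.Class))

    # A property, with its domain and range made explicit
    schema.add((GEO.observedAt, RDF.type, RDF.Property))
    schema.add((GEO.observedAt, RDFS.domain, GEO.Observation))
    schema.add((GEO.observedAt, RDFS.range, GEO.Station))
    schema.add((GEO.observedAt, RDFS.label, Literal("observed at")))

    print(schema.serialize(format="turtle"))

An OWL ontology would go further still – e.g., constraining how many stations an observation may be associated with, or declaring classes disjoint – which is the knowledge-centric direction alluded to above.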

From niche areas of geophysics to IP networking, the Semantic Framework is broadly applicable. As a workflow for systematically enhancing the expressivity of data, the Framework is based on open standards emerging largely from the World Wide Web Consortium (W3C). Because there is significant interest in this next-generation Web from numerous parties and angles, implementation platforms already allow for increasingly expressive representations of data today. In making data actionable, the ultimate value of the Semantic Framework is in providing a means for integrating data from seemingly incongruous disciplines. For example, such representations can yield genuinely new results – derived by querying the representation through SPARQL, a ‘semantified’ counterpart of the Structured Query Language (SQL).
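To make the SPARQL point tangible, here is a small, purely illustrative query against the hypothetical graph sketched earlier; it is not a query from my published work.

    # Assumes the Graph g built in the earlier hypothetical sketch
    query = """
    PREFIX geo: <http://example.org/geophysics#>
    SELECT ?obs ?val ?station
    WHERE {
      ?obs a geo:Observation ;
           geo:value ?val ;
           geo:observedAt ?station .
    }
    """
    for obs, val, station in g.query(query):
        print(obs, val, station)

Once data from different disciplines shares, or maps onto, a common graph, the same query mechanism spans all of it; that is the integration claim made in the paragraph above.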

I’ve spoken formally and informally about this research to audiences in the sciences, IT, and elsewhere. With York co-authors spanning academic and non-academic staff, I’ve also published four refereed journal papers on aspects of the Framework, and have an invited book chapter currently under review – interestingly, this chapter has been contributed to a book focusing on data management in the Semantic Web. Of course, I’d be pleased to share any of my publications and discuss aspects of this work with those finding it of interest.

With thanks in advance for any connections you’re able to facilitate, Ian. 

If anything comes of this, I’m sure I’ll write about it here – eventually!

In the meantime, feedback is welcome.

Recent Articles on Bright Hub

I’ve added a few more articles over on Bright Hub:

Google Chrome for Linux on Bright Hub: Series Expanded

I recently posted on a new article series on Google Chrome for Linux that I’ve been developing over on Bright Hub. My exploration has turned out to be more engaging than I anticipated! At the moment, there are six articles in the series:

I anticipate a few more …

It’s also important to share that Google Chrome for Linux does not yet exist as an end-user application. Under the auspices of the Chromium Project, however, there is a significant amount of work underway. And because this work is taking place out in the open (Chromium is an Open Source project), now is an excellent time to engage – especially for serious enthusiasts.

Google Should Not Be Making Mac and Linux Users Wait for Chrome!

Google should not be making Mac and Linux users wait for Chrome.

I know:

  • There’s a significant guerrilla-marketing campaign in action – the officially unstated competition with Microsoft for ‘world domination’. First Apple (with Safari), and now Google (with Chrome), are besting Microsoft Internet Explorer on Windows platforms. In revisiting the browser wars of the late nineties, it’s crucial for Google Chrome to go toe-to-toe with the competition. And whether we like to admit it or not, that competition is Microsoft Internet Explorer on the Microsoft Windows platform.
  • The Mac and Linux ports will come from the Open Source’ing of Chrome … and we need to wait for this … Optimistically, that’s short-term pain, long-term gain.

BUT:

  • Google is risking alienating its Mac and Linux faithful … and this is philosophically at odds with all-things Google.
  • It’s 2008, not 1998. In the past, as an acknowledged fringe community, Mac users were accustomed to the 6-18 month lag in software availability. Linux users, on the other hand, were often satiated by me-too feature/functionality made available by the Open Source community. In 2008, however, we have come to expect support to appear simultaneously on Mac, Linux and Windows platforms. For example, Open Source Mozilla releases their flagship Firefox browser (as well as their Thunderbird email application) simultaneously on Mac and Linux as well as Windows platforms. Why not Chrome?

So, what should Google do in the interim?

  • Provide progress updates on a regular basis. Google requested email addresses from those Mac and Linux users interested in Chrome … Now they need to use them!
  • Continue to engage Mac/Linux users. The Chromium Blog, Chromium-Announce, Chromium-discuss, Chromium – Google Code, etc., comprise an excellent start. Alpha and beta programs, along the lines of Mozilla’s, might also be a good idea …
  • Commence work on ‘Browser War’ commercials. Apple’s purposefully understated commercials exploit weaknesses inherent in Microsoft-based PCs to promote their Macs. Microsoft fired back with (The Real) Bill Gates and comedian Jerry Seinfeld to … well … confuse us??? Shift to browsers. Enter Google. Enter Mozilla. Just think how much fun we’d all have! Surely Google can afford a few million to air an ad during Super Bowl XLII! Excessive? Fine. I’ll take the YouTube viral version at a fraction of the cost then … Just do it!

For now, the Pareto (80-20) principle remains in play. And although this drives a laser-sharp focus on Microsoft Internet Explorer on the Microsoft Windows platform at the outset, Google has to shift swiftly to Mac and Linux to really close on the disruptiveness of Chrome’s competitive volley.

And I, for one, can’t wait!

CANHEIT 2008: Enhanced Abstract

The program specifics for CANHEIT 2008 are becoming available online.
The enhanced abstract for one of my presentations is as follows:

From the Core to the Edge: Automating Awareness of Network Topology through Knowledge Representation

Ian Lumb – Manager Network Operations, Computing and Network Services (York University)

Abstract

Like many other institutions of higher education, York University makes extensive use of Open Source software. This is especially true in the case of monitoring and managing IP (Internet Protocol) devices. On the monitoring front, extensive manual configuration is currently required to make monitoring solutions (e.g., NAGIOS) aware of the topology of the York network. And with respect to managing, NetDisco automatically discovers assets placed on the network, but is unable to abstract away unnecessary complexity in, e.g., rendering schematics of the network topology. These and other examples suggest that NAGIOS and NetDisco operate in the realm of data, and possibly information, but are unable to envisage network topology from a knowledge-representation perspective. Thus the current focus is on applying a recently developed knowledge-representation platform to such routine requirements in network monitoring and management. The platform is based on Semantic Web standards and implementations, and has already been proven effective in various scientific contexts. Ultimately our objective is to extract data automatically discovered by NetDisco, represent it using the knowledge-based platform, and transform a topology-aware representation of the data into configuration data that can be ingested by NAGIOS.

A visual representation of the approach is illustrated below.
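As a complement to that figure, here is a minimal, purely illustrative sketch of the intended pipeline in Python. The device records are hand-coded stand-ins for what NetDisco would actually discover (this does not use NetDisco’s real API or database schema), the RDF vocabulary is hypothetical, and the output follows the standard NAGIOS host-definition format.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF

    NET = Namespace("http://example.org/network#")  # hypothetical vocabulary

    # Hand-coded stand-ins for rows NetDisco would discover automatically
    discovered = [
        {"name": "sw-core-1",    "ip": "10.1.0.1",  "uplink": None},
        {"name": "sw-access-12", "ip": "10.1.12.2", "uplink": "sw-core-1"},
    ]

    # Represent the discovered devices, and their topology, as a graph
    g = Graph()
    for d in discovered:
        dev = NET[d["name"]]
        g.add((dev, RDF.type, NET.Device))
        g.add((dev, NET.address, Literal(d["ip"])))
        if d["uplink"]:
            g.add((dev, NET.uplinkTo, NET[d["uplink"]]))

    # Query the topology-aware representation ...
    query = """
    PREFIX net: <http://example.org/network#>
    SELECT ?dev ?ip ?parent
    WHERE {
      ?dev a net:Device ;
           net:address ?ip .
      OPTIONAL { ?dev net:uplinkTo ?parent }
    }
    """

    # ... and transform it into NAGIOS host definitions, where uplinks
    # become 'parents' so that NAGIOS understands the topology
    for dev, ip, parent in g.query(query):
        print("define host {")
        print("  use        generic-host")
        print("  host_name  " + str(dev).split("#")[-1])
        print("  address    " + str(ip))
        if parent is not None:
            print("  parents    " + str(parent).split("#")[-1])
        print("}")

The real platform operates on NetDisco’s actual database and a far richer representation; the sketch only illustrates the extract, represent, and transform steps named in the abstract.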

CANHEIT 2008: York Involvement

York University will be well represented at CANHEIT 2008.
Although you’ll find the details in CANHEIT’s online programme, allow me to whet your appetite regarding our contributions:

Synced-Data Applications: The Bastard Child of Convergence

At the Search Engine Strategies Conference in August 2006, in an informal conversation, Google CEO Eric Schmidt stated:

What’s interesting [now] is that there is an emergent new model, and you all are here because you are part of that new model. I don’t think people have really understood how big this opportunity really is. It starts with the premise that the data services and architecture should be on servers. We call it cloud computing – they should be in a “cloud” somewhere. And that if you have the right kind of browser or the right kind of access, it doesn’t matter whether you have a PC or a Mac or a mobile phone or a BlackBerry or what have you – or new devices still to be developed – you can get access to the cloud. There are a number of companies that have benefited from that. Obviously, Google, Yahoo!, eBay, Amazon come to mind. The computation and the data and so forth are in the servers.

My interpretation of cloud computing is summarized in the following figure.


Yesterday, I introduced the concept of Synced-Data Applications (SDAs). SDAs are summarized in the following figure.


SDAs owe their existence to the convergence of the cloud and the desktop/handheld.