Confronting the Fear of Public Speaking via Virtual Environments

Confession: In the past, I’ve been extremely quick to dismiss the value of Second Life in the context of teaching and learning.

Even worse, my dismissal was not fact-based … and, if truth be told, I’ve gone out of my way to avoid opportunities to ‘gather the facts’ by attending presentations at conferences, conducting my own research online, speaking with my colleagues, etc.

So I, dear reader, am as surprised as any of you to have had an egg-on-my-face epiphany this morning …

Please allow me to elaborate:

It was at some point during this morning’s brainstorming session that the egg hit me squarely in the face:

Why not use Nortel web.alive to prepare graduate students for presenting their research?

Often feared more than death and taxes, public speaking is an essential aspect of academic research – regardless of the discipline.

Enter Nortel web.alive, with its virtual environment of a large lecture hall – complete with a podium, a projection screen for sharing slides and, most importantly, an audience!

As a former graduate student, I could easily ‘see’ myself in this environment with increasingly realistic audiences comprised of friends, family and/or pets, fellow graduate students, my research supervisor, my supervisory committee, etc. Because Nortel web.alive only requires a Web browser, my audience isn’t geographically constrained. This geographical freedom is important as it allows for participation – e.g., between graduate students at York in Toronto and their supervisor who just happens to be on sabbatical in the UK. (Trust me, this happens!)

As the manager of Network Operations at York, I’m always keen to encourage novel use of our campus network. The public-speaking use case I’ve described here has the potential to make innovative use of our campus network, regional network (GTAnet), provincial network (ORION), and even national network (CANARIE) that would ultimately allow for global connectivity.

While I busy myself scraping the egg off my face, please chime in with your feedback. Does this sound useful? Are you aware of other efforts to use virtual environments to confront the fear of public speaking? Are there related applications that come to mind for you? (As someone who’s taught classes of about 300 students in large lecture halls, a little bit of a priori experimentation in a virtual environment would’ve been greatly appreciated!)

Update (November 13, 2009): I just Googled the title of this article and came up with a few relevant hits; further research is required.

On Knowledge-Based Representations for Actionable Data …

I bumped into a professional acquaintance last week. After describing briefly a presentation I was about to give, he offered to broker introductions to others who might have an interest in the work I’ve been doing. To initiate the introductions, I crafted a brief description of what I’ve been up to for the past 5 years in this area. I’ve also decided to share it here as follows: 

As always, [name deleted], I enjoyed our conversation at the recent AGU meeting in Toronto. Below, I’ve tried to provide some context for the work I’ve been doing in the area of knowledge representations over the past few years. I’m deeply interested in any introductions you might be able to broker with others at York who might have an interest in applications of the same.

Since 2004, I’ve been interested in expressive representations of data. My investigations started with a representation of geophysical data in the eXtensible Markup Language (XML). Although this was successful, use of the approach revealed an oversight: the importance of metadata (data about data). To address this oversight, a subsequent effort introduced a relationship-centric representation via the Resource Description Framework (RDF). RDF, by the way, forms the underpinnings of the next-generation Web – variously known as the Semantic Web, Web 3.0, etc. In addition to taking care of issues around metadata, use of RDF paved the way for increasingly expressive representations of the same geophysical data. For example, to represent features in and of the geophysical data, an RDF-based scheme for annotation was introduced using the XML Pointer Language (XPointer). Somewhere around this point in my research, I placed all of this into a framework.

A data-centric framework for knowledge representation.
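To make the RDF stage of the Framework concrete, here is a minimal sketch using Python and the rdflib library; the geo: vocabulary and the observation itself are hypothetical stand-ins for illustration, not the representation from my published work:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

# Hypothetical vocabulary, for illustration only
GEO = Namespace("http://example.org/geophysics#")

g = Graph()
g.bind("geo", GEO)

obs = URIRef("http://example.org/data/obs42")

# The observation itself ...
g.add((obs, RDF.type, GEO.MagnetogramTrace))
# ... plus the metadata (data about data) that an XML-only approach overlooked
g.add((obs, GEO.instrument, Literal("fluxgate magnetometer")))
g.add((obs, GEO.acquiredOn, Literal("2004-06-01", datatype=XSD.date)))

print(g.serialize(format="turtle"))
```

Note that the relationships, and not just the markup, carry the meaning here – which is precisely what RDF adds over plain XML.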

 In addition to applying my Semantic Framework to use cases in Internet Protocol (IP) networking, I’ve continued to tease out increasingly expressive representations of data. Most recently, these representations have been articulated in RDFS – i.e., RDF Schema. And although I have not reached the final objective of an ontological representation in the Web Ontology Language (OWL), I am indeed progressing in this direction. (Whereas schemas capture the vocabulary of an application domain in geophysics or IT, for example, ontologies allow for knowledge-centric conceptualizations of the same.)  
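To illustrate the schema-versus-ontology distinction, an RDFS layer over the hypothetical vocabulary sketched above might look like the following; again, the terms are merely illustrative:

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

GEO = Namespace("http://example.org/geophysics#")  # hypothetical, as above

schema = Graph()
schema.bind("geo", GEO)

# The vocabulary of the application domain: classes ...
schema.add((GEO.GeophysicalRecord, RDF.type, RDFS.Class))
schema.add((GEO.MagnetogramTrace, RDF.type, RDFS.Class))
schema.add((GEO.MagnetogramTrace, RDFS.subClassOf, GEO.GeophysicalRecord))

# ... and properties, with their domains and ranges
schema.add((GEO.instrument, RDF.type, RDF.Property))
schema.add((GEO.instrument, RDFS.domain, GEO.GeophysicalRecord))
schema.add((GEO.instrument, RDFS.range, RDFS.Literal))
```

An OWL ontology would go further – asserting, e.g., that every MagnetogramTrace has exactly one instrument – turning vocabulary into conceptualization.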

From niche areas of geophysics to IP networking, the Semantic Framework is broadly applicable. As a workflow for systematically enhancing the expressivity of data, the Framework is based on open standards emerging largely from the World Wide Web Consortium (W3C). Because there is significant interest in this next-generation Web from numerous parties and angles, implementation platforms already allow for increasingly expressive representations of data. In making data actionable, the ultimate value of the Semantic Framework is in providing a means for integrating data from seemingly incongruous disciplines. For example, such representations can deliver genuinely new results when queried via SPARQL – a query language for RDF that plays a role loosely analogous to that of the Structured Query Language (SQL).
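For a flavour of what querying such a representation looks like, here is a minimal, self-contained SPARQL example against the hypothetical vocabulary (again via rdflib):

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

GEO = Namespace("http://example.org/geophysics#")  # hypothetical vocabulary

g = Graph()
obs = URIRef("http://example.org/data/obs42")
g.add((obs, RDF.type, GEO.MagnetogramTrace))
g.add((obs, GEO.instrument, Literal("fluxgate magnetometer")))

# Ask the graph which observations exist, and how they were acquired
query = """
PREFIX geo: <http://example.org/geophysics#>
SELECT ?obs ?instrument
WHERE {
    ?obs a geo:MagnetogramTrace ;
         geo:instrument ?instrument .
}
"""

for row in g.query(query):
    print(f"{row.obs} was acquired with a {row.instrument}")
```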

I’ve spoken formally and informally about this research to audiences in the sciences, IT, and elsewhere. With York co-authors spanning academic and non-academic staff, I’ve also published four refereed journal papers on aspects of the Framework, and have an invited book chapter currently under review – interestingly, this chapter has been contributed to a book focusing on data management in the Semantic Web. Of course, I’d be pleased to share any of my publications and discuss aspects of this work with those finding it of interest.

With thanks in advance for any connections you’re able to facilitate, Ian. 

If anything comes of this, I’m sure I’ll write about it here – eventually!

In the meantime, feedback is welcome.

ORION/CANARIE National Summit

Just in case you haven’t heard:

… join us for an exciting national summit on innovation and technology, hosted by ORION and CANARIE, at the Metro Toronto Convention Centre, Nov. 3 and 4, 2008.

“Powering Innovation – a National Summit” brings over 55 keynotes, speakers and panelists from across Canada and the US, including best-selling author of Innovation Nation, Dr. John Kao; President/CEO of Internet2, Dr. Doug Van Houweling; Chancellor of the University of California at Berkeley, Dr. Robert J. Birgeneau; advanced visualization guru Dr. Chaomei Chen of Philadelphia’s Drexel University; and many more. Sara Diamond, President of the Ontario College of Art & Design, chairs “A Boom with View”, a session on visualization technologies. Dr. Gail Anderson presents on forensic science research. Other speakers include the host of CBC Radio’s Spark, Nora Young; Delvinia Interactive’s Adam Froman; and the President and CEO of Zerofootprint, Ron Dembo.

This is an excellent opportunity to meet and network with up to 250 researchers, scientists, educators, and technologists from across Ontario, Canada, and the international community. Attend sessions on the very latest in e-science: network-enabled platforms, cloud computing, the greening of IT, applications in the “cloud”, innovative visualization technologies, teaching and learning in a Web 2.0 universe, and more. Don’t miss exhibitors and showcases ranging from holographic 3D imaging to IP-based television platforms to advanced networking.

For more information, visit http://www.orioncanariesummit.ca.

Juniper Seminar: Key Takeaways

Yesterday, I attended the Toronto session of a Juniper seminar focused on security and datacenter solutions.

The following are the key takeaways I extracted:

  • Juniper is standards-oriented. In the area of NAC, e.g., they are co-chairing, with Symantec, the Trusted Computing Group’s Trusted Network Connect (TNC) effort. It’s not (yet) clear to me how the TCG interplays with the IETF … And speaking of the IETF, Juniper’s Network and Security Manager (NSM) makes use of the IETF’s NETCONF standard in, e.g., simplifying the provisioning of new devices on the network. (A minimal NETCONF sketch follows this list.)
  • Juniper has a comprehensive portfolio of offerings at the intersection of security and networking. Interestingly, Juniper’s Security Threat Response Manager (STRM) OEMs technology from Q1Labs.
  • 802.1x is a solid bet. Based on a number of trends, and a variety of requirements, Juniper promotes use of 802.1x. Even though this is a path we’ve already identified, it’s good to have it independently validated …
  • Security, and other services, can be offloaded to purpose-built devices in the core. Instead of inserting, e.g., a FWSM into a device (e.g., a Cisco 65xx) that is primarily providing routing and switching services, Juniper has recently introduced a new paradigm with its SRX series. Touted as a services gateway for the core, the purpose of the SRX is to offload from the routing/switching devices various services – e.g., firewall, VPN, etc. As I understand it, the SRX runs JUNOS with various enhancements from ScreenOS (their O/S from their firewall devices). Even if you don’t make use of Juniper solutions, it may make sense to understand and potentially apply the offloading-of-services concept/paradigm in your core.
  • Juniper allows for the virtualization of switches. Juniper Virtual Chassis (VC) is currently only available for their EX 4200 platform. With VC, it’s possible to virtualize up to 10 physically distinct EX 4200s into one. Within the next year, Juniper plans to provide VC on, e.g., their EX 8200 platform. Because VMware’s vMotion requires layer-2 adjacency, server virtualization may prove to be a significant driver for switch virtualization. I expect that this will prove, e.g., to be particularly relevant in providing failover services (at the networking layer) between multiple, physically distinct, and geographically separated locations.
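A brief aside on the NETCONF takeaway above: to give a flavour of how lightweight standards-based device interaction can be, here is a minimal sketch using the open-source ncclient Python library. The device address and credentials are placeholders, and this is of course a toy script rather than anything like NSM itself:

```python
from ncclient import manager

# Placeholder address and credentials for a NETCONF-capable device
with manager.connect(
    host="192.0.2.1",       # TEST-NET address; substitute a real device
    port=830,               # the standard NETCONF-over-SSH port
    username="admin",
    password="secret",
    hostkey_verify=False,   # acceptable in a lab, not in production
) as m:
    # Retrieve the running configuration as XML
    running = m.get_config(source="running")
    print(running)
```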

Even though the event appeared to be more of the sales-y/marketing-y variety, there was substantial technical content in evidence.

CANHEIT 2008: Enhanced Abstract

The program specifics for CANHEIT 2008 are becoming available online.
The enhanced abstract for one of my presentations is as follows:

From the Core to the Edge: Automating Awareness of Network Topology through Knowledge Representation

Ian Lumb – Manager Network Operations, Computing and Network Services (York University)

Abstract

Like many other institutions of higher education, York University makes extensive use of Open Source software. This is especially true in the case of monitoring and managing IP (Internet Protocol) devices. On the monitoring front, extensive manual configuration is currently required to make monitoring solutions (e.g., NAGIOS) aware of the topology of the York network. And with respect to managing, NetDisco automatically discovers assets placed on the network, but is unable to abstract away unnecessary complexity in, e.g., rendering schematics of the network topology. These and other examples suggest that NAGIOS and NetDisco operate in the realm of data, and possibly information, but are unable to envisage network topology from a knowledge-representation perspective. Thus the current focus is on applying a recently developed knowledge-representation platform to such routine requirements in network monitoring and management. The platform is based on Semantic Web standards and implementations and has already been proven effective in various scientific contexts. Ultimately our objective is to extract data automatically discovered by NetDisco, represent it using the knowledge-based platform, and transform a topology-aware representation of the data into configuration data that can be ingested by NAGIOS.

A visual representation of the approach is illustrated below.
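To complement the visual, here is a minimal sketch of the intended pipeline in Python with rdflib; the network vocabulary, the discovered-device records, and the NetDisco extraction step are all hypothetical stand-ins:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

NET = Namespace("http://example.org/network#")  # hypothetical vocabulary

# Stand-ins for devices NetDisco would have discovered automatically
discovered = [
    {"name": "sw-core-1", "ip": "10.0.0.1", "uplink": None},
    {"name": "sw-edge-7", "ip": "10.0.7.1", "uplink": "sw-core-1"},
]

# Represent the devices - and, crucially, their relationships - as RDF
g = Graph()
for d in discovered:
    dev = NET[d["name"]]
    g.add((dev, RDF.type, NET.Switch))
    g.add((dev, NET.address, Literal(d["ip"])))
    if d["uplink"]:
        g.add((dev, NET.uplinkTo, NET[d["uplink"]]))

# Transform the topology-aware representation into NAGIOS host definitions;
# the 'parents' directive lets NAGIOS tell "down" from "unreachable"
for dev in g.subjects(RDF.type, NET.Switch):
    name = str(dev).split("#")[-1]
    print("define host {")
    print(f"    host_name  {name}")
    print(f"    address    {g.value(dev, NET.address)}")
    parent = g.value(dev, NET.uplinkTo)
    if parent is not None:
        print(f"    parents    {str(parent).split('#')[-1]}")
    print("}")
```

The point of the middle step is that the topology (who uplinks to whom) becomes queryable knowledge, rather than something re-derived by hand for each monitoring tool.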

CANHEIT 2008: York Involvement

York University will be well represented at CANHEIT 2008.
Although you’ll find the details in CANHEIT’s online programme, allow me to whet your appetite regarding our contributions:

Is Desktop Software Dead?

When was the last time you were impressed by desktop software?

Really impressed?

After seeing (in chronological order) Steve Jobs, Al Gore and Tim Bray make use of Apple Keynote, I absolutely had to give it a try. And impressed I was – and to some extent, still am. For me, this revelation happened about a year ago. I cannot recall the previous instance – i.e., the time I was truly impressed by desktop software.

Although I may be premature, I can’t help but ask: Is desktop software dead?
A few data points:
  • Wikipedia states: “There is no page titled ‘desktop software’.” What?! I suppose you could argue I’m hedging my bets by choosing an obscure phrase (not!), but seriously, it is remarkable that there is no Wikipedia entry for “desktop software”!
  • Microsoft, easily the leading purveyor of desktop software, is apparently in trouble. Although Gartner’s recent observations target Microsoft Windows Vista, this indirectly spells trouble for all Windows applications as they rely heavily on the platform provided by Vista.
  • There’s an innovation hiatus. And that’s diplomatically generous! Who really cares about the feature/functionality improvements in, e.g., Microsoft Office? When was the last time a whole new desktop software category appeared? Even in the Apple Keynote example I shared above, I was impressed by Apple’s spin on presentation software. Although Keynote required me to unlearn habits developed through years of using Microsoft PowerPoint, I was under no delusions of having entered some new genre of desktop software.
  • Thin is in! The bloatware that is modern desktop software is crumbling under its own weight. It must be nothing short of embarrassing to see this proven on a daily basis by the likes of Google Docs. Hardware vendors must be crying in their beers as well, as for years consumers have been forced to upgrade their desktops to accommodate the latest revs of their favorite desktop OS and apps. And of course, this became a negatively reinforcing cycle, as the hardware upgrades masked the inefficiencies inherent in the bloated desktop software. Thin is in! And thin, these days, doesn’t necessarily translate to a penalty in performance.
  • Desktop software is reaching out to the network. Despite efforts like Microsoft Office Online, the lacklustre results speak for themselves. It’s 2008, and Microsoft is still playing catch-up with upstarts like Google. Even desktop software behemoth Adobe has shown better signs of getting it (network-wise) with recent entrées such as Adobe AIR. (And of course, with the arrival of Google Gears, providers of networked software are reaching out to the desktop.)

The figure below attempts to graphically represent some of the data points I’ve ranted about above.

In addition to providing a summary, the figure suggests:

  • An opportunity for networked, Open Source software. AFAIK, that upper-right quadrant is completely open. I haven’t done an exhaustive search, so any input would be appreciated.
  • A new battle ground. Going forward, the battle will be less about commercial versus Open Source software. The battle will be more about desktop versus networked software.

So: Is desktop software dead?

Feel free to chime in!

To Do for Microsoft: Create a Wikipedia entry for “desktop software”.

QoS On My Mind …

QoS has been on my mind lately.
Why?
I suppose there are a number of reasons.
We’re in the process of re-architecting our data network at York. We’re starting off by adding redundancy in various ways, and anticipate the need to address QoS in preparing for our future deployment of a VoIP service.
Of course, that doesn’t mean we don’t already have VoIP or VoIP-like protocols present on our existing undifferentiated network. In addition to Skype, there are groups that have already embraced videoconferencing solutions that make use of protocols like RTP. And given that there’s already a Top 50 list of Open Source VoIP applications to choose from, I’m sure these aren’t the only examples of VoIP-like applications on our network.
At the moment, I have more questions about QoS than answers. 
For example:
  • If we introduce protocol-based QoS, won’t any application using that protocol gain access to the differentiated service? (The sketch after this list illustrates how easily an application can mark its own traffic.) I sense that QoS can be applied in a very granular fashion, but do I really want to turn my entire team of network specialists into QoS specialists? (From an operational perspective, I know I can’t afford to!)
  • When is the right time to introduce QoS? Users are clamoring for QoS ASAP, as it’s often perceived as a panacea – a panacea that often masks the root cause of what really ails them … From a routing and switching perspective, do we wait for tangible signs of congestion before implementing QoS? I certainly have the impression that others managing campus as well as regional networks plan to do this.
  • And what about standards? QoS isn’t baked into IPv4 itself; standards such as DiffServ layer it on, and implementations vary in how well they interoperate between vendors. Should MPLS, used frequently in service providers’ networks, be employed as a vehicle for QoS in the campus network context?
  • QoS presupposes that use is to be made of an existing network. Completely segmenting networks – i.e., dedicating a network to a VoIP deployment – is also an option, and one that has the potential to bypass the need for QoS altogether.
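As an aside on that first question: “protocol-based” differentiation is easy for any application to claim, because the DSCP marking lives in packets the application itself emits. A minimal sketch (Python on Linux; the address and port are placeholders):

```python
import socket

# DSCP 46 (Expedited Forwarding, the usual marking for voice) occupies the
# upper six bits of the IP TOS byte: 46 << 2 == 0xB8
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 46 << 2)

# Every datagram sent on this socket now carries the EF marking -
# whether or not it is actually voice traffic
sock.sendto(b"not necessarily voice", ("192.0.2.10", 5004))
```

Unless the network re-marks or polices at the edge, anything marked EF rides in the priority queue – hence my unease.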
I know that as I dig deeper into the collective brain trust, answers – and more questions – will emerge.
And even though there are a number of successful deployments of VoIP that can be pointed to, there still seems to be a need to have a deeper discussion on QoS – starting from a strategic level. 
As I reflect more and more on QoS I’m thinking that a suitably targeted BoF, at CANHEIT 2008 for example, might provide a fertile setting for an honest discussion.     

Net@EDU 2008: Key Takeaways

Earlier this week, I participated in the Net@EDU Annual Meeting 2008: The Next 10 Years.   For me, the key takeaways are:

  • The Internet can be improved. IP, its transport protocols (RTP, SIP, TCP and UDP), and especially HTTP, are stifling innovation at the edges – everything (device-oriented) on IP and everything (application-oriented) on the Web. There are a number of initiatives that seek to improve the situation. One of these, with tangible outcomes, is the Stanford Clean Slate Internet Design Program.
  • Researchers and IT organizations need to be reunited. In the 1970s and 1980s, these demographics worked closely together and delivered a number of significant outcomes. Since the 1990s, these groups have remained separate and distinct. This separation has not benefited either group. As the manager of a team focused on operating a campus network, who still manages to conduct a modest amount of research, this takeaway resonates particularly strongly with me.
  • DNSSEC is worth investigating now. DNS is a mission-critical service. It is often, however, an orphaned service in many IT organizations. DNSSEC comprises four standards that extend the original concept in security-savvy ways – e.g., they will harden your DNS infrastructure against DNS-targeted attacks. Although production implementations remain in the future, now is the time to get involved. (A quick validation check appears in the sketch after this list.)
  • The US is lagging behind in the case of broadband. An EDUCAUSE blueprint details the current situation, and offers a prescription for rectifying it. As a Canadian, I’m pleased to note that Canada’s progress in this area is exceptional, even though it is regarded as a much more rural nation than the US. The key to the Canadian success, and a key component of the blueprint’s prescription, is the funding model that shares costs evenly between two levels of government (federal and provincial) as well as the network builder/owner.
  • Provisioning communications infrastructures for emergency situations is a sobering task. Virginia Tech experienced 100-3000% increases in the demands on their communications infrastructure as a consequence of their April 16, 2007 event. Such stress factors are exceedingly difficult to estimate and account for. In some cases, responding in real time allowed for adequate provisioning through a tremendous amount of collaboration. Mass notification remains a challenge.
  • Today’s and tomorrow’s students are different from yesterday’s. Although this may sound obvious, the details are interesting. Ultimately, this difference derives from the fact that today’s and tomorrow’s students have more intimately integrated technology into their lives from a very young age.
  • Cyberinfrastructure remains a focus. EDUCAUSE has a Campus Cyberinfrastructure Working Group. Some of their deliverables are soon to include a CI digest, plus contributions from their Framing and Information Management Focus Groups. In addition to the working-group session, Don Middleton of NCAR discussed the role of CI in the atmospheric sciences. I was particularly pleased that Middleton made a point of showcasing semantic aspects of virtual observatories such as the Virtual Solar-Terrestrial Observatory (VSTO).
  • The Tempe Mission Palms Hotel is an outstanding venue for a conference. Net@EDU has themed its annual meetings around this hotel, Tempe, Arizona, and the month of February. The venue delivers on this strategic choice in spades. From individual rooms to conference food and logistics to the mini gym and pool, the Tempe Mission Palms Hotel delivers.
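A postscript on the DNSSEC takeaway: a quick way to see validation in action is to set the DNSSEC-OK bit on a query to a validating resolver and inspect the response. A minimal sketch using the dnspython library (the zone and resolver address are merely illustrative):

```python
import dns.flags
import dns.message
import dns.query
import dns.rdatatype

# Query a validating resolver with the DNSSEC-OK (DO) bit set
q = dns.message.make_query("ietf.org", dns.rdatatype.A, want_dnssec=True)
resp = dns.query.udp(q, "8.8.8.8", timeout=5)

# AD (Authenticated Data) flag: the resolver validated the answer
print("authenticated:", bool(resp.flags & dns.flags.AD))
# RRSIG records in the answer: the zone is signed
print("signed:", any(r.rdtype == dns.rdatatype.RRSIG for r in resp.answer))
```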
