CANHEIT 2008: Update on Semantic Topologies Presentation

As I blog, CANHEIT 2008 is winding down …

And although my entire presentation will soon appear online at the conference’s Web site, I thought I’d share here an updated version of the approach image I shared previously.

As you’ll see from the presentation, this work is now progressing well. There should be more to share soon.

CANHEIT 2008: Enhanced Abstract

The program specifics for CANHEIT 2008 are becoming available online.
The enhanced abstract for one of my presentations is as follows:

From the Core to the Edge: Automating Awareness of Network Topology through Knowledge Representation

Ian Lumb – Manager, Network Operations, Computing and Network Services (York University)


Like many other institutions of higher education, York University makes extensive use of Open Source software. This is especially true in the case of monitoring and managing IP (Internet Protocol) devices. On the monitoring front, extensive manual configuration is currently required to make monitoring solutions (e.g., NAGIOS) aware of the topology of the York network. And with respect to managing, NetDisco automatically discovers assets placed on the network, but is unable to abstract away unnecessary complexity in, e.g., rendering schematics of the network topology. These and other examples suggest that NAGIOS and NetDisco operate in the realm of data, and possibly information, but are unable to envisage network topology from a knowledge-representation perspective. Thus the current focus is on applying a recently developed knowledge-representation platform to such routine requirements in network monitoring and management. The platform is based on Semantic Web standards and implementations and has already been proven effective in various scientific contexts. Ultimately our objective is to extract data automatically discovered by NetDisco, represent it using the knowledge-based platform, and transform a topology-aware representation of the data into configuration data that can be ingested by NAGIOS.

A visual representation of the approach is illustrated below.
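To make the intended endpoint concrete, here’s a minimal sketch of the pipeline’s final step: rendering discovered-topology data as NAGIOS host definitions. The device dictionaries and field names below are hypothetical stand-ins for what NetDisco actually discovers; the `define host` block, including the topology-aware `parents` directive, is standard NAGIOS object syntax.

```python
# Sketch: turn topology data (as a discovery tool like NetDisco might report
# it) into NAGIOS host definitions. Input fields are invented for illustration.

def to_nagios_host(device):
    """Render one discovered device as a NAGIOS host definition."""
    lines = ["define host {"]
    lines.append(f"    host_name  {device['name']}")
    lines.append(f"    address    {device['ip']}")
    if device.get("uplink"):  # topology awareness maps onto NAGIOS 'parents'
        lines.append(f"    parents    {device['uplink']}")
    lines.append("    use        generic-host")
    lines.append("}")
    return "\n".join(lines)

discovered = [
    {"name": "core-sw-1", "ip": "10.0.0.1"},
    {"name": "edge-sw-7", "ip": "10.0.3.7", "uplink": "core-sw-1"},
]
config = "\n\n".join(to_nagios_host(d) for d in discovered)
print(config)
```

The `parents` directive is where topology awareness pays off operationally: with it, NAGIOS can distinguish a down host from an unreachable one behind a failed uplink.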

CANHEIT 2008: York Involvement

York University will be well represented at CANHEIT 2008.
Although you’ll find the details in CANHEIT’s online programme, allow me to whet your appetite regarding our contributions:

Annotation Modeling: To Appear in Comp & Geosci

What a difference a day makes!
Yesterday I learned that my paper on semantic platforms was rejected.
Today, however, the news was better as a manuscript on annotation modeling was
accepted for publication.
It’s been a long road for this paper:

The abstract of the paper is as follows:

Annotation Modeling with Formal Ontologies:
Implications for Informal Ontologies

L. I. Lumb[1], J. R. Freemantle[2], J. I. Lederman[2] & K. D.
[1] Computing and Network Services, York University, 4700 Keele Street,
Toronto, Ontario, M3J 1P3, Canada
[2] Earth & Space Science and Engineering, York University, 4700 Keele
Street, Toronto, Ontario, M3J 1P3, Canada
Knowledge representation is increasingly recognized as an important component of any cyberinfrastructure (CI). In order to expediently address scientific needs, geoscientists continue to leverage the standards and implementations emerging from the World Wide Web Consortium’s (W3C) Semantic Web effort. In an ongoing investigation, previous efforts have been aimed towards the development of a semantic framework for the Global Geodynamics Project (GGP). In contrast to other efforts, the approach taken has emphasized the development of informal ontologies, i.e., ontologies that are derived from the successive extraction of Resource Description Framework (RDF) representations from eXtensible Markup Language (XML), and then Web Ontology Language (OWL) from RDF. To better understand the challenges and opportunities for incorporating annotations into the emerging semantic framework, the present effort focuses on knowledge-representation modeling involving formal ontologies. Although OWL’s internal mechanism for annotation is constrained to ensure computational completeness and decidability, externally originating annotations based on the XML Pointer Language (XPointer) can easily violate these constraints. Thus the effort of modeling with formal ontologies allows for recommendations applicable to the case of incorporating annotations into informal ontologies.
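To illustrate the first step of that successive extraction, here’s a toy sketch that derives an RDF representation (serialized as N-Triples) from a fragment of XML. The element names, identifier, and namespace URI are invented for illustration, and a real pipeline would use an RDF toolkit rather than hand-assembled strings.

```python
# Toy illustration of extracting RDF (N-Triples) from XML using only the
# standard library. All names and the namespace URI are hypothetical.
import xml.etree.ElementTree as ET

xml_doc = """
<station id="CA001">
  <name>Cantley</name>
  <instrument>superconducting gravimeter</instrument>
</station>
"""

NS = "http://example.org/ggp#"  # hypothetical namespace

def xml_to_ntriples(text):
    """Map the root element to a subject and its children to predicates."""
    root = ET.fromstring(text)
    subject = f"<{NS}{root.get('id')}>"
    triples = [f"{subject} <{NS}type> <{NS}{root.tag}> ."]
    for child in root:
        triples.append(f'{subject} <{NS}{child.tag}> "{child.text}" .')
    return triples

for t in xml_to_ntriples(xml_doc):
    print(t)
```

The key point of the informal-ontology approach survives even in this toy: the XML’s implicit structure (element nesting) becomes explicit, machine-reasonable relationships once lifted into RDF.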

I expect the whole paper will be made available in the not-too-distant future …

Evolving Semantic Frameworks into Platforms: Unpublished ms.

I learned yesterday that the manuscript I submitted to HPCS 2008 was not accepted 😦
It may take my co-authors and me some time to revise and re-submit this manuscript.
This anticipated re-submission latency, along with our belief that the content needs to be shared in a timely fashion, motivates sharing the manuscript online.
To whet your appetite, the abstract is as follows:

Evolving a Semantic Framework into a Network-Enabled Semantic Platform
A data-oriented semantic framework has been developed previously for a project involving a network of globally distributed scientific instruments. Through the use of this framework, the semantic expressivity and richness of the project’s ASCII data is systematically enhanced as it is successively represented in XML (eXtensible Markup Language), RDF (Resource Description Framework) and finally as an informal ontology in OWL (Web Ontology Language). In addition to this representational transformation, there is a corresponding transformation from data into information into knowledge. Because this framework is broadly applicable to ASCII and binary data of any origin, it is appropriate to develop a network-enabled semantic platform that identifies the enabling semantic components and interfaces that already exist, as well as the key gaps that need to be addressed to completely implement the platform. After briefly reviewing the semantic framework, a J2EE (Java 2 Enterprise Edition) based implementation for a network-enabled semantic platform is provided. And although the platform is in principle usable, ongoing adoption suggests that strategies aimed at processing XML via parallel I/O techniques are likely an increasingly pressing requirement.

Browser Wars Revisited: Safari vs. Firefox?

Seemingly not to be out-done by all the buzz surrounding Firefox 3, Apple today (March 18, 2008) released version 3.1 of its Safari Web browser.
Apparently, we’ll love Safari because:

  • It’s fast – Up to 3x faster than Firefox 2 on page loads and 4.5x faster on JavaScript execution. And although that’s impressive, performance is definitely coming across as one of Firefox 3’s core competencies. It’d be interesting to run the same tests with Safari 3.1 and even Firefox 3 Beta 4.
  • The UI – Of course. However, this is another area where Firefox 3 has made significant headway. Even on a Mac, Firefox 3’s UI is elegant and clean. (For an amusing take on the Apple UI paradigm, have a look at this Eric Burke cartoon. I’m not sure how Burke would represent the Mozilla UI … However, one thing’s for sure: it’s become a lot more elegant and clean over the years.)
  • Find – The Firefox 3 implementation looks remarkably like Safari’s.
  • Resizable text areas – Excellent. Not sure if Firefox 3 has this.

Safari 3.1 also presents a twofold irony with respect to Web standards:

  1. You need to do a little digging (page 8 of the Safari Product Overview) to determine what is meant by Web standards support. And once you do, you’ll learn that it relates to CSS, HTML 5 and SVG. Of these, “HTML 5 offline storage support” has the potential to be most interesting, as Google is analogously demonstrating with Google Gears. So, it’s ironic that you need to dig for something of such value.
  2. In its support of HTML 5, we have a commercial entity (Apple) leading the way in terms of implementing standards. This is refreshing in general, and in Apple’s case in particular, as traditional expectations would have the Open Source implementations (e.g., Firefox) ahead in this regard. To quote Alanis Morissette: “Isn’t it ironic… don’t you think?”

When you factor in support for Windows, and apparently frequent releases, it’s no wonder that Safari is gaining momentum at the expense of Microsoft Internet Explorer (IE) and Firefox.
Not that I’ve been following developments with IE, but one has to wonder: if we are to revisit the IE vs. Netscape browser wars of yesteryear, might the combatants this time be Apple Safari 3.x and Mozilla Firefox 3.x?
One can only hope!

QoS On My Mind …

QoS has been on my mind lately.
I suppose there are a number of reasons.
We’re in the process of re-architecting our data network at York. We’re starting off by adding redundancy in various ways, and anticipate the need to address QoS in preparing for our future deployment of a VoIP service.
Of course, that doesn’t mean we don’t already have VoIP or VoIP-like protocols present on our existing undifferentiated network. In addition to Skype, there are groups that have already embraced videoconferencing solutions that make use of protocols like RTP. And given that there’s already a Top 50 list of Open Source VoIP applications to choose from, I’m sure these aren’t the only examples of VoIP-like applications on our network.
At the moment, I have more questions about QoS than answers. 
For example:
  • If we introduce protocol-based QoS, won’t this give any application using that protocol access to differentiated QoS? I sense that QoS can be applied in a very granular fashion, but do I really want to turn my entire team of network specialists into QoS specialists? (From an operational perspective, I know I can’t afford to!)
  • When is the right time to introduce QoS? Users are clamoring for QoS ASAP, as it’s often perceived as a panacea – a panacea that often masks the root cause of what really ails them … From a routing and switching perspective, do we wait for tangible signs of congestion before implementing QoS? I certainly have the impression that others managing campus as well as regional networks plan to do this.
  • And what about standards? QoS isn’t baked into IPv4 itself, although DiffServ re-purposes the IP header’s ToS byte, and there are implementations that promote interoperability between vendors. Should MPLS, used frequently in service providers’ networks, be employed as a vehicle for QoS in the campus network context?
  • QoS presupposes use of an existing, shared network. Completely segmenting networks, i.e., dedicating a network to a VoIP deployment, is also an option – one that has the potential to bypass the need for QoS entirely.
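For a concrete sense of what protocol- or application-level marking looks like, here’s a minimal sketch of an application requesting differentiated treatment by setting the DSCP bits on a UDP socket. DSCP EF (Expedited Forwarding, decimal 46) is the marking conventionally used for voice; whether the network honours it is entirely up to the switches and routers in the path.

```python
# Minimal sketch: mark outbound UDP traffic with DSCP EF so that routers
# configured for DiffServ can classify it. Honouring the marking is the
# network's decision, not the application's.
import socket

DSCP_EF = 46            # Expedited Forwarding, conventionally used for voice
TOS_EF = DSCP_EF << 2   # DSCP occupies the top six bits of the ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)
# sock.sendto(b"voice payload", ("192.0.2.10", 5004))  # RTP-style send (example address)
sock.close()
```

This is exactly the granularity question raised above: nothing stops an arbitrary application from setting the same bits, so trust in markings has to be established (or markings re-written) at the network edge.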
I know that as I dig deeper into the collective brain trust, answers – and more questions – will emerge.
And even though there are a number of successful deployments of VoIP that can be pointed to, there still seems to be a need to have a deeper discussion on QoS – starting from a strategic level. 
As I reflect more and more on QoS I’m thinking that a suitably targeted BoF, at CANHEIT 2008 for example, might provide a fertile setting for an honest discussion.     

Net@EDU 2008: Key Takeaways

Earlier this week, I participated in the Net@EDU Annual Meeting 2008: The Next 10 Years. For me, the key takeaways are:

  • The Internet can be improved. IP, its transport protocols (RTP, SIP, TCP and UDP), and especially HTTP, are stifling innovation at the edges – everything (device-oriented) on IP and everything (application-oriented) on the Web. There are a number of initiatives that seek to improve the situation. One of these, with tangible outcomes, is the Stanford Clean Slate Internet Design Program.
  • Researchers and IT organizations need to be reunited. In the 1970s and 1980s, these demographics worked closely together and delivered a number of significant outcomes. Since the 1990s, these groups have remained separate and distinct. This separation has not benefited either group. As the manager of a team focused on operation of a campus network who still manages to conduct a modest amount of research, this takeaway resonates particularly strongly with me.
  • DNSSEC is worth investigating now. DNS is a mission-critical service. It is often, however, an orphaned service in many IT organizations. DNSSEC comprises four standards that extend the original DNS specification in security-savvy ways – e.g., they will harden your DNS infrastructure against DNS-targeted attacks. Although production implementations remain in the future, the time to get involved is now.
  • The US is lagging behind in the case of broadband. An EDUCAUSE blueprint details the current situation, and offers a prescription for rectifying it. As a Canadian, I note that Canada’s progress in this area is exceptional, even though Canada is regarded as a much more rural nation than the US. The key to the Canadian success, and a key component of the blueprint’s prescription, is the funding model that shares costs evenly between two levels of government (federal and provincial) as well as the network builder/owner.
  • Provisioning communications infrastructures for emergency situations is a sobering task. Virginia Tech experienced 100-3000% increases in the demands on their communications infrastructure as a consequence of their April 16, 2007 event. Such stress factors are exceedingly difficult to estimate and account for. In some cases, responding in real time allowed for adequate provisioning through a tremendous amount of collaboration. Mass notification remains a challenge.
  • Today’s and tomorrow’s students are different from yesterday’s. Although this may sound obvious, the details are interesting. Ultimately, this difference derives from the fact that today’s and tomorrow’s students have more intimately integrated technology into their lives from a very young age.
  • Cyberinfrastructure remains a focus. EDUCAUSE has a Campus Cyberinfrastructure Working Group. Some of their deliverables are soon to include a CI digest, plus contributions from their Framing and Information Management Focus Groups. In addition to the working-group session, Don Middleton of NCAR discussed the role of CI in the atmospheric sciences. I was particularly pleased that Middleton made a point of showcasing semantic aspects of virtual observatories such as the Virtual Solar-Terrestrial Observatory (VSTO).
  • The Tempe Mission Palms Hotel is an outstanding venue for a conference. Net@EDU has themed its annual meetings around this hotel, Tempe, Arizona and the month of February. The venue delivers on this strategic choice in spades: from individual rooms to conference food and logistics to the mini gym and pool, The Tempe Mission Palms Hotel delivers.


Parsing XML: Commercial Interest

Over the past few months, a topic I’ve become quite interested in is parsing XML. And more specifically, parsing XML in parallel.

Although I won’t take this opportunity to expound in any detail on what I’ve been up to, I did want to state that this topic is receiving interest from significant industry players. For example, here are two data points:

Parsing of XML documents has been recognized as a performance bottleneck when processing XML. One cost-effective way to improve parsing performance is to use parallel algorithms and leverage the use of multi-core processors. Parallel parsing for XML Document Object Model (DOM) has been proposed, but the existing schemes do not scale up well with the number of processors. Further, there is little discussion of parallel parsing methods for other parsing models. The question is: how can we improve parallel parsing for DOM and other XML parsing models, when multi-core processors are available?

Intel Corp. released a new software product suite that is designed to enhance the performance of XML in service-oriented architecture (SOA) environments, or other environments where XML handling needs optimization. Intel XML Software Suite 1.0, which was announced earlier this month, provides libraries to help accelerate XSLT, XPath, XML schemas and XML parsing. XML performance was found to be twice that of open source solutions when Intel tested its product …
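The structure of data-parallel parsing can be sketched in a few lines: partition a document into independently parseable chunks, parse the chunks concurrently, and merge the results. The record format below is invented, and threads are used only for brevity – CPython’s GIL limits CPU-bound speedup, which is why the schemes above rely on processes or lower-level parallelism.

```python
# Structural sketch of data-parallel XML parsing: partition, parse
# concurrently, merge. Chunk contents are invented for illustration; threads
# keep the sketch short, though real speedups need processes or native code.
import xml.etree.ElementTree as ET
from concurrent.futures import ThreadPoolExecutor

chunks = [
    "<records><r id='1'/><r id='2'/></records>",
    "<records><r id='3'/><r id='4'/></records>",
]

def parse_chunk(text):
    """Parse one self-contained chunk and extract the record ids."""
    return [el.get("id") for el in ET.fromstring(text)]

with ThreadPoolExecutor(max_workers=2) as pool:
    # pool.map preserves input order, so the merged result is deterministic
    ids = [i for part in pool.map(parse_chunk, chunks) for i in part]
print(ids)
```

The hard part in practice, which this sketch sidesteps entirely, is the partitioning: real XML cannot be split at arbitrary byte offsets, which is precisely why parallel DOM parsing schemes are a research topic.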

As someone with a vested interest in XML, I regard data points such as these as very positive overall.

AGU Poster: Relationship-Centric Ontology Integration

Later today in San Francisco, at the 2007 Fall Meeting of the American Geophysical Union (AGU), one of my co-authors will be presenting our poster entitled “Relationship-Centric Ontology Integration” (abstract).

This poster will be in a session that I co-convened and have described elsewhere.

A PDF version of the poster is available elsewhere (agu07_the_poster_v2.pdf).