What’s in Your SOAP Toolkit?

Your choice of SOAP toolkit may be the most important decision you make in implementing a Service Oriented Architecture (SOA) based on Web Services.

This wasn’t always the case.

For example, depictions of First-Generation Web Services (WS-1G) typically presented SOAP, WSDL and UDDI as more-or-less equal players.

[Figure: the classic WS-1G triangle relating SOAP, WSDL and UDDI (ws_triangle.png)]

In reality, however, UDDI was and remains (e.g., Erl, Chapter 3) a bit of a non-starter.

WSDL remains key, and continues to evolve with version 2 well on its way to becoming a bona fide standard.

So, what about SOAP?

In learning more about Second-Generation Web Services (WS-2G), I continue to be struck by how much value is being driven through SOAP. And because SOAP messages are expressed in XML, that value is ultimately being driven through XML. It is for reasons like this that SOA guru Thomas Erl states that SOAs are ultimately all about XML and not Web Services (Erl, Chapter 3, Section 3.5.4).

And this returns us to the point made at the outset.

Because so much value is being driven through SOAP, you must choose your SOAP toolkit wisely. More specifically, toolkit choice will determine, for example, which WS-2G specifications are supported via implementations. Even more specifically, as a third-generation SOAP toolkit, Apache Axis2 includes implementations of a number of core (e.g., WS-Addressing) and extended (e.g., WS-Coordination, WS-ReliableMessaging, WS-Security) WS-2G standards.
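To make that concrete, here is a minimal Axis2 client sketch. The endpoint URL, namespace, and operation name are placeholders, and the exact engageModule signature has varied across Axis2 releases, so treat this as a sketch of the idea rather than copy-and-paste code:

```java
import org.apache.axiom.om.OMAbstractFactory;
import org.apache.axiom.om.OMElement;
import org.apache.axiom.om.OMFactory;
import org.apache.axiom.om.OMNamespace;
import org.apache.axis2.addressing.EndpointReference;
import org.apache.axis2.client.Options;
import org.apache.axis2.client.ServiceClient;

public class AddressingClientSketch {
    public static void main(String[] args) throws Exception {
        // Build a trivial payload element (namespace and operation are placeholders).
        OMFactory factory = OMAbstractFactory.getOMFactory();
        OMNamespace ns = factory.createOMNamespace("http://example.org/ns", "ex");
        OMElement payload = factory.createOMElement("ping", ns);

        // Point the client at a hypothetical endpoint and set a WS-Addressing action.
        Options options = new Options();
        options.setTo(new EndpointReference("http://example.org/services/PingService"));
        options.setAction("urn:ping");

        ServiceClient client = new ServiceClient();
        client.setOptions(options);

        // Engaging the "addressing" module is what adds the WS-Addressing headers
        // (wsa:To, wsa:Action, wsa:MessageID, ...) to the outgoing SOAP message.
        client.engageModule("addressing");

        OMElement response = client.sendReceive(payload);
        System.out.println(response);
    }
}
```

The point is less the particulars than the pattern: the toolkit, not your application code, is what makes a WS-2G specification available as a deployable capability.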

SOAP toolkits may also reflect vendor bias. For example, IBM has championed WS-Notification, whereas Microsoft’s emphasis has been on WS-Eventing. These vendor biases at the standards level are quite likely reflected at the SOAP toolkit level. For example, it is reasonable to expect that an IBM SOAP toolkit would include an implementation of WS-Notification, whereas a Microsoft SOAP toolkit would offer one of WS-Eventing instead.

Even though working with SOAP may appear to be a low-level detail at the outset of conceptualizing an SOA, it turns out to be a very important consideration, and one that is amplified as SOA adoption proceeds.

Digital Terrain Mapping via LIDAR

My exposure to LIDAR applications has been primarily atmospheric, ranging from the purely scientific (ozone-column mapping, imaging hydrometeors in clouds) to the commercial (on-board detection of clear-air turbulence, CAT).

Of course, other applications of LIDAR technology exist, and one of these is Digital Terrain Mapping (DTM).

Terra Remote Sensing Inc. is a leader in LIDAR-based DTM. Particularly impressive is their ability to perform surface DTM in areas of dense vegetation. As I learned at a very recent meeting of the Ontario Association of Remote Sensing (OARS), Terra has already found a number of very practical applications for LIDAR-based DTM.

Some additional applications that come to mind are:

  • DTM of urban canopies for atmospheric experiments – Terra has already mapped buildings for various purposes. The same approach could be used to better ground (sorry 😉) atmospheric experiments. For example, the boundary-layer modeling that was conducted for Joint Urban 2003 (JU03) employed a digitization of Oklahoma City. A LIDAR-based DTM would’ve made this an even more realistic effort.
  • Monitoring the progress of Global Change in the Arctic – In addition to LIDAR-based DTM, Terra is also having some success characterizing surfaces based on LIDAR intensity measurements. Because open water and a glacier would be expected to have different DTM and intensity characteristics, Terra should also be able to monitor Global Change as nunataks are progressively transformed into traditional islands (land isolated and surrounded by open water). With the Arctic as a bellwether for Global Change, it’s not surprising that the nunatak-to-island transformation is getting attention. A toy sketch of this surface-characterization idea follows the list.
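To illustrate the kind of decision being made (and nothing more: the thresholds, units, and class names below are invented for illustration, not Terra's methodology), here is a toy Java sketch that labels LIDAR returns using elevation and normalized intensity:

```java
/**
 * Toy sketch only: classify LIDAR returns as open water, glacier ice, or exposed
 * rock using illustrative elevation/intensity thresholds. Real surface
 * characterization would rely on calibrated intensity, local geometry, and far
 * more sophisticated classifiers.
 */
public class SurfaceClassifierSketch {

    enum Surface { OPEN_WATER, GLACIER_ICE, EXPOSED_ROCK }

    static final double WATER_ELEVATION_M = 1.0;  // illustrative threshold, metres
    static final double ICE_MIN_INTENSITY = 0.6;  // illustrative, normalized 0..1

    static Surface classify(double elevationM, double intensity) {
        if (elevationM < WATER_ELEVATION_M) {
            return Surface.OPEN_WATER;   // low, flat returns near sea level
        }
        if (intensity >= ICE_MIN_INTENSITY) {
            return Surface.GLACIER_ICE;  // assumed bright return from ice/snow
        }
        return Surface.EXPOSED_ROCK;     // e.g., the summit of an emerging nunatak
    }

    public static void main(String[] args) {
        System.out.println(classify(0.2, 0.4));   // OPEN_WATER
        System.out.println(classify(45.0, 0.8));  // GLACIER_ICE
        System.out.println(classify(45.0, 0.3));  // EXPOSED_ROCK
    }
}
```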

Although my additional examples are (once again) atmospheric in nature, as Terra is demonstrating, there are numerous applications for LIDAR-based technologies.

HP Labs Innovates to Sustain Moore’s Law

I had the fortunate opportunity to attend a presentation by Dr. R. Stanley Williams at an HP user forum event in March 2000 in San Jose.

Subsequently, I ran across mention of his work at HP Labs in various places, including Technology Review.

So, when I read in PC World that

Hewlett-Packard researchers may have figured out a way to prolong Moore’s Law by making chips more powerful and less power-hungry

I didn’t immediately dismiss this as marketing hyperbole.

Because of Moore’s Law, transistor density has traditionally grabbed all the attention when it comes to next-generation chips. However, power consumption, and the resulting heat generated, are also gating (sorry, I couldn’t resist that!) factors when it comes to chip design. This is why there is a well-established trend toward multicore chip architectures. With the multicore paradigm, keeping Moore’s Law relevant becomes a responsibility aggregated across cores.
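As a rough first-order reminder of why power gates the design (this is just the standard CMOS dynamic-power relation, nothing specific to HP’s work):

```latex
P_{\text{dynamic}} \approx \alpha \, C \, V_{dd}^{2} \, f
```

where α is the switching-activity factor, C the switched capacitance, V_dd the supply voltage, and f the clock frequency. Pack in more transistors switching at higher frequency and the power (and heat) climbs unless voltage or activity comes down.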

Williams has found a way to sustain the relevance of Moore’s Law without having to make use of a multicore architecture.

Working with Gregory S. Snider, the HP team has redesigned the Field Programmable Gate Array (FPGA) by introducing a nano-scale interconnect (a field-programmable nanowire interconnect, FPNI). As hinted earlier in this posting, the net effect is to allow for significantly increased transistor density and reduced power consumption. A very impressive combination!

Although Sun executive Scott McNealy is usually associated with the aphorism “The network is the computer”, it may now be HP’s turn to recapitulate.

BlackBerry vs. iPhone: RIM Has First-Mover Advantage

I recently tied success in the imminent BlackBerry vs. iPhone struggle to market segmentation.

Of course, there’s also first-mover advantage.

Kudos to RIM for timing the release of the “Pearl White” version of the BlackBerry Pearl on the heels of a week’s worth of iPhone buzz.

I wonder how long it will be before someone makes the iPhone skin available on the “Pearl White” BlackBerry Pearl.

RIM has significant first-mover advantage and also the potential to capitalize on the iPhone marketing buzz.

Apple needs to move quickly and deliver a very solid 1.0 version of the iPhone, as they’re already in catch-up mode.

BlackBerry vs. iPhone: It’s All About Market Segments

The BlackBerry vs. iPhone buzz is getting louder.

Nadir Mohamed (COO for Rogers Canada) had this to say in a recent Globe and Mail interview:

Is RIM’s BlackBerry in trouble?

Their strength has been in push e-mail and their devices are very intuitive.

So from that perspective I don’t think the iPhone hits the core of their market.

When you have these multiple applications, whether camera, music, video, voice, data or e-mail, I think generally most devices have strength in a few of them and may have the others available. What RIM’s Pearl will be known for is probably different than what the iPhone will be known for. But we’re talking about a product that hasn’t been produced yet.

Translation: Success is contingent upon market segmentation.

Rogers Targeting Personalization and Mobility

Nadir Mohamed (COO for Rogers Canada) recently stated in a Globe and Mail interview that: “… the two big customer attributes we are building for are personalization and mobility.”

He was also asked about the GSM-based Apple iPhone and Rogers’ position:

What advantage does Rogers get from being the only Canadian carrier that uses the GSM format?

Eighty per cent of the world uses GSM. What it means is we have devices with incredible features, that are attractively priced, because we have the scale of 80 per cent of the world using these. And we get the products early.

I’m not saying whether we have agreements or anything [with Apple], but given the iPhone was launched on GSM, we’re in good position to reinforce that we’re the first and have the best-feature devices.

The upshot? Rogers is well placed with respect to delivering the iPhone to Canadian consumers, and iPhone is perfectly aligned with Rogers’ desire to target personalization and mobility.

Will I be trading in my BlackBerry for an iPhone?

I love my BlackBerry. It does exactly what I expect it to do. After years of disappointment with technology, this is as strong an endorsement as I can think of.

I have the same feeling every time I use my Apple MacBook Pro. I can see my daughters having the same experience every time they use their Apple iPods.

Coming from this perspective, the anticipation I have for the Apple iPhone is nothing short of spine-tingling. It’s all anticipation at this point because all I know about the iPhone is what I can read online.

Of course, that won’t stop me from compiling a list of considerations on whether or not I will trade in my BlackBerry for an iPhone:

  • Physicality – RIM nailed the physical aspects of the BlackBerry. Apple nailed the physical aspects of the MacBook and iPod, but what about the iPhone? For example, I’m concerned about trading in the highly tactile experience of my BlackBerry 7290’s real keypad for a touchscreen-based, soft keypad. I’ve had the soft-keypad experience via various Palm devices, and that’s precisely why I know I prefer the real keypad on the BlackBerry.
  • Footprint – RIM nailed device footprint. So did Palm. So did Apple with the iPod. In my estimation no handheld representation of a PC, based on some pared-down version of Windows (WindowsCE, aka. “WINCE”), even comes close. Device footprint is the cumulative effect of the operating system, applications, data, etc. In the case of the BlackBerry, Palm, or iPod, there is minimal bloat. The iPhone has to deliver a low-bloat device footprint. Although I like Apple’s chances here, the challenge will be significant as the iPhone is based on Apple OS X. It’s not clear whose CPU will be inside.
  • Propriety – According to one source:

    Apple has long preferred to develop products built on closed, proprietary technologies rather than open standards. Its proprietary iTunes music software, which will not work with devices other than Apple’s iPod, is one example of such a system.

    To some extent, of course, this is true. To a greater extent, however, it is a red herring.

As RIM has demonstrated with the BlackBerry, integration is the real issue. The BlackBerry is proprietary hardware. Because the operating system and applications are all J2ME-based, third parties can and do develop for the BlackBerry platform, and RIM facilitates this. This is only the handheld portion of the picture, as integration with enterprise-scale messaging platforms (Microsoft Outlook, IBM Lotus Notes, etc.) is also key to the BlackBerry’s overall delivered value. Given that the iPhone is based on Apple Mac OS X, there are clearly prospects for integration.

  • Software
    • Office software – As with the BlackBerry, office-productivity software is absent from the iPhone. Although this doesn’t mean you won’t be able to find such software for your iPhone, it does underscore the fact that office apps are not a focal point. From one perspective, this is an omission. From another, it is highly consistent with closing the expectation/experience gap I raised at the outset.
    • Chat software – RIM provides its own chat software (BlackBerry Messenger); it works well between BlackBerrys. However, it’s the third-party chat applications that amplify the integration of the BlackBerry with enterprise-messaging systems (via the RIM BlackBerry Enterprise Messenger, IBM Sametime, etc.) or with Internet messaging systems (Yahoo! Messenger, GoogleTalk, etc.). Frankly, I’m surprised that some variant of iChat wasn’t included with the iPhone. Even from the non-business perspective, iChat would be a phenomenal way of further capturing the mindshare of the iPod generation that is currently umbilically tethered to MSN Messenger. I predict Apple will address this oversight before product release.
  • Legalities – The impending legal battle between Cisco and Apple is generating almost as much attention as the iPhone itself. As someone who lived through the RIM vs. NTP situation while traveling extensively in the US, I consider settlement of this legal matter a precondition of purchase.
  • Connectivity – I’ve used BlackBerrys on CDMA- and GSM-based cellular networks. With today’s expectation of IP everywhere, one wonders when an IP-ready version of the BlackBerry will become available. (Today, I only care when I run a Web browser on my BlackBerry.) The iPhone will grok both cellular and IP-based wireless networks on release. Even more, the iPhone is ready for next-generation wireless networks based on the emerging IEEE 802.11n standard. From the connectivity perspective, then, the iPhone presents a phenomenal convergence play. RIM has less than six months to ensure it retains mindshare on this increasingly important front.

So, will I be trading in my BlackBerry for an iPhone?

It’s too early to say, but I’m definitely keen to learn more.

Annotation Paper Submitted to HPCS 2007 Event

I’ve blogged and presented recently (locally and at an international scientific event) on the topic of annotation and knowledge representation.

Working with co-authors Jerusha Lederman, Jim Freemantle and Keith Aldridge, a written version of the recent AGU presentation has been prepared and submitted to the HPCS 2007 event. The abstract is as follows:

Semantically Enabling the Global Geodynamics Project:
Incorporating Feature-Based Annotations via XML Pointer Language (XPointer)

Earth Science Markup Language (ESML) is efficient and effective in representing scientific data in an XML-based formalism. However, features of the data being represented are not accounted for in ESML. Such features might derive from events, identifications, or some other source. In order to account for features in an ESML context, they are considered from the perspective of annotation. Although it is possible to extend ESML to incorporate feature-based annotations internally, complicating factors are identified that apply to ESML and most XML dialects. Rather than pursue the ESML-extension approach, an external representation for feature-based annotations via XML Pointer Language (XPointer) is developed. In previous work, it has been shown that it is possible to extract relationships from ESML-based representations and capture the results in the Resource Description Framework (RDF). Application of this same approach to XPointer-based annotations of ESML representations results in a revised semantic framework for the Global Geodynamics Project (GGP).
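To make the external-annotation idea a little more concrete, here is a minimal sketch, assuming Apache Jena; the document URL, XPointer expression, namespace, and property names are purely illustrative and are not the paper's actual vocabulary. The annotation lives outside the ESML document and simply points into it:

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

public class XPointerAnnotationSketch {
    public static void main(String[] args) {
        // Hypothetical namespace for annotation terms; not part of ESML or the paper.
        String anno = "http://example.org/annotation#";

        Model model = ModelFactory.createDefaultModel();
        model.setNsPrefix("anno", anno);

        // The annotation target is an XPointer (element scheme) into an
        // illustrative ESML document, so the feature is described externally,
        // without modifying the ESML itself.
        Resource target = model.createResource(
                "http://example.org/ggp/station42.esml#element(/1/4/2)");

        Property describesFeature = model.createProperty(anno, "describesFeature");
        Property featureType = model.createProperty(anno, "featureType");

        Resource annotation = model.createResource(anno + "annotation-001")
                .addProperty(describesFeature, target)
                .addProperty(featureType, "event");

        model.write(System.out, "TURTLE");
    }
}
```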

Once the paper is accepted, I’ll make a pre-submission version available online.

Because the AGU session I participated in has also issued a call for papers, I’ll be extending the HPCS 2007 submission in various interesting ways.

And finally, thoughts are starting to gel on how annotations may be worked into the emerging notions I’ve been having on knowledge-based heuristics.

Stay tuned.

Knowledge-Based Heuristics: Further Research is Required

Recently, I’ve blogged about:

In both cases, there is an argument for combining heuristic with knowledge-based approaches.

Although I did find “heuristics” and “knowledge” juxtaposed in Googling for “knowledge-based heuristics”, I believe the tightly coupled examples I’ve described above have some degree of novelty.

Further research is required 🙂

A Bayesian-Ontological Approach for Fighting Spam

When it comes to fighting spam, Bayesian and ontological approaches are not mutually exclusive.

They could be used together in a highly complementary fashion.

For example, you could use Bayesian approaches, as they are implemented today, to build a spam ontology. In other words, the Bayesian approach would be extended through the addition of knowledge-representation methods for fighting spam.
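As a minimal sketch of what extending a Bayesian filter with knowledge representation might look like (the tokens, probabilities, categories, and threshold below are invented purely for illustration):

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative only: take per-token spam probabilities of the kind a Bayesian
 * filter already maintains, and promote the strongest spam indicators into a
 * small "spam ontology" -- here just concepts grouped under hand-picked
 * categories, standing in for a real knowledge representation.
 */
public class SpamOntologySketch {

    public static void main(String[] args) {
        // Token -> P(spam | token), as a Bayesian filter might estimate them.
        Map<String, Double> tokenSpamProbability = new HashMap<>();
        tokenSpamProbability.put("viagra", 0.99);
        tokenSpamProbability.put("mortgage", 0.92);
        tokenSpamProbability.put("meeting", 0.08);
        tokenSpamProbability.put("lottery", 0.97);

        // Category -> (concept -> probability); a stand-in for ontology classes.
        Map<String, Map<String, Double>> spamOntology = new HashMap<>();

        double threshold = 0.9; // only strong spam indicators become concepts
        tokenSpamProbability.forEach((token, p) -> {
            if (p >= threshold) {
                String category = categorize(token); // hand-coded here; could be learned
                spamOntology.computeIfAbsent(category, k -> new HashMap<>())
                            .put(token, p);
            }
        });

        System.out.println(spamOntology);
    }

    // Toy categorization; a real system would derive these relationships as well.
    private static String categorize(String token) {
        if (token.equals("viagra")) return "pharmaceuticals";
        if (token.equals("mortgage") || token.equals("lottery")) return "financial";
        return "uncategorized";
    }
}
```

The Bayesian machinery stays exactly as it is today; the ontology simply accumulates and organizes what the filter has learned so that it can be reasoned over.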

This is almost the opposite of the Wikipedia-based approach I blogged about recently.

In the Wikipedia-based approach, the ontology consists of ham-based knowledge.

In the alternative I’ve presented here, the ontology consists of spam-based knowledge.

Both approaches are technically viable. However, it’d be interesting to see which one actually works better in practice.

Either way, automated approaches for constructing ontologies, as I’ve outlined elsewhere, are likely to be of significant applicability here.

Another point is also clear: Either way, this will be a computationally intensive activity!