Rediscovering the ’net effect

Even though I’ve been on the ’net long enough to recall when Gopher was cool, I must confess that blogging recently made me feel like an Internet newbie all over again. Why? I once again became intoxicated by the ’net effect.

It’s simple: I write a post and publish it. Within seconds, minutes, hours, or days, I qualitatively and quantitatively experience the ’net effect:

  • People view my blog entry via a reference from some other Web page
  • People view my blog entry via a search engine
  • People view my blog entry via a feed

And the WordPress software magically captures all of this ‘view data’, and makes it readily available to me. I find it intriguing to learn which posts generated the most traffic, or which search-engine terms caused someone to arrive at my blog entry. Very intoxicating indeed!

Of course, this ’net effect is a well-known phenomenon, one quantified by Metcalfe’s Law and its successors. (Complex networks are even more intriguing!)
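To make the intuition concrete, here’s a tiny back-of-the-envelope sketch (my own illustration, not drawn from Metcalfe): the number of potential pairwise connections among n participants is n(n−1)/2, so a network’s ‘value’ grows roughly as the square of its size.

    # Back-of-the-envelope illustration of Metcalfe's Law: the number of
    # potential pairwise connections among n participants is n * (n - 1) / 2,
    # so a network's 'value' grows roughly as the square of its size.

    def pairwise_connections(n: int) -> int:
        """Potential links among n participants."""
        return n * (n - 1) // 2

    for n in (10, 100, 1000, 10000):
        print(f"{n:>6} participants -> {pairwise_connections(n):>12,} potential connections")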

Network theory aside, there’s nothing like a little personal experience, a little personal validation, to remind one of the ’net effect.

One more observation: The very first thesis of The Cluetrain Manifesto states that “Markets are conversations”. Reflect on the Open Source movement for a few femtoseconds and you’ll totally ‘get’ this thesis. Because I completely agree with it, I have to ask: Why aren’t marketers everywhere taking to blogging like white on rice? (I’m not suggesting that this isn’t already happening, just that I’m surprised it hasn’t reached exponential proportions!)

Automating the Creation of Ontologies from the Bottom Up

In an interesting entry on the Semantic Web, a fellow blogger writes:

However, the Semantic Web, which is still in a development phase where researchers are trying to define the best and most usable design models, would require the participation of thousands of knowledgeable people over time to produce those domain-specific ontologies necessary for its functioning.

The part of this quote that requires comment is the claim that the Semantic Web “would require the participation of thousands of knowledgeable people over time” to produce its ontologies.

This is a common perception – i.e., that people need to be directly involved in the manual creation of formal ontologies.

Frankly, this simply isn’t the case.

Vehicles such as GRDDL (Gleaning Resource Descriptions from Dialects of Languages) are already available and are designed to automate the creation of informal ontologies from the bottom up. More specifically, GRDDL facilitates the extraction of RDF from XML by letting a document reference the transformation (typically XSLT) that gleans RDF statements from it.
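To make the mechanism concrete, here’s a minimal sketch of the GRDDL idea in Python (my own illustration; the file names record.xml and record-to-rdf.xsl are hypothetical placeholders, and the lxml library is assumed to be installed). In GRDDL proper, the source document itself points at the transformation that gleans RDF from it; the sketch simply applies such a transformation.

    # Minimal sketch of the GRDDL idea: apply a transformation (typically XSLT)
    # to a domain-specific XML document in order to 'glean' RDF from it.
    # The file names below are hypothetical placeholders.

    from lxml import etree

    source = etree.parse("record.xml")             # domain-specific XML document
    stylesheet = etree.parse("record-to-rdf.xsl")  # transformation the document references
    glean = etree.XSLT(stylesheet)

    rdf = glean(source)                            # result: RDF (e.g., RDF/XML)
    print(str(rdf))

No human hand-authors an ontology here; the RDF falls out of data that people were publishing anyway.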

I’ve spelled all of this out via a scientific example elsewhere.

Fortunately, you don’t need to take my word on this – you can go to the source: a Tim Berners-Lee keynote at a recent BioTech event.

The Open Grid Forum: Necessary … but Sufficient?

[Note: A revised and expanded version of this entry has appeared in GRIDtoday.]

The Global Grid Forum (GGF) and the Enterprise Grid Alliance (EGA) announced yesterday (June 26, 2006) their merger to form the Open Grid Forum. Many will regard this as a very positive development. More importantly, this is a very necessary development. Why? At this juncture, Grid Computing has minimal ability to tolerate fragmentation in standards. In fact, Grid Computing sorely needs to deliver outcomes that matter – and this will only come from focus … in standards and elsewhere.

Having worked for Grid software vendor Platform Computing, Inc. for about seven years, I needed ‘a little distance’ to appreciate ‘all of this’ – i.e., and to be perfectly blunt, that Grid Computing isn’t quite as big a deal as many would like it to be.

Even though it won’t make this unpleasant pill any easier to swallow, allow me to elaborate via a few anecdotal data points:

  • Despite convergence efforts such as the Open Grid Services Architecture (OGSA), Web Services continues to significantly eclipse Grid Computing. And for those who’ve delved a little deeper into the weeds, the ‘evolution’ of the Open Grid Services Infrastructure (OGSI) into the Web Services Resource Framework (WSRF) serves only to amplify this perception. Extrapolating further suggests the possibility of ‘collateral damage’ – e.g., as the gap between the promise and reality of the Semantic Web decreases, and Web Services increasingly plays a facilitative role, the Semantic Grid runs the risk of being not much more than a footnote.
  • Despite its ‘validation’ of Grid Computing through its endorsement of the open-source Globus Toolkit, IBM markets more around virtualization these days … and virtualization (think VMware, Xen, etc.), along with Web Services, is also proving to be a big deal.
  • With the notable exception of Platform Computing and perhaps a small handful of others, Grid Computing startups are struggling to land customers and, frankly, to survive – in stark contrast to those companies that make Web Services or virtualization their business. Moreover, a ‘supply chain’ continues to gel around Web Services (e.g., just Google “Web Services”) and virtualization (e.g., PlateSpin) – a supply chain that features startups with compelling value propositions.
  • The highest-profile demonstrations of Grid Computing run the risk of trivializing it. It may seem harsh to paint illustrations such as the World Community Grid as technologically trivial but, let’s face it, these are not the most sophisticated demonstrations of Grid Computing. Equally damaging are those clustered applications (e.g., Oracle 10g) that market themselves as ‘Grid-enabled applications’.
  • Applications can be effectively Grid-enabled by drawing on non-GGF and non-EGA standards. Scali Manage 5 provides a compelling illustration by drawing on Web-Based Enterprise Management (WBEM) and Eclipse; whereas WBEM is an umbrella for a number of standards under the auspices of the Distributed Management Task Force (DMTF), Eclipse is an implementation framework and platform available from a consortium.

I remain a Grid Computing enthusiast, but as a realistic enthusiast I believe that “Grid Computing sorely needs to deliver outcomes that matter”.

That being the case, yesterday’s creation of the Open Grid Forum is necessary, but is it sufficient?

Tech Transfer: From Outer Space to Medicine!

One of the last prospects I worked with while at Scali, Inc. was Bartron Medical Imaging, LLC. Bartron has recently become a Scali customer. Although each and every Scali customer was interesting to me, Bartron was both interesting and unusual. Why? Bartron has applied image-analysis software originally developed by NASA in a medical context. Even more impactful, Bartron’s application is aiding in, for example, the early detection of breast cancer. Not only is this a spectacular exemplar of technology transfer, it is also a technique that may have significant human impact. Bartron’s future has the potential to be both exciting and deeply meaningful.

It’s all about bandwidth – isn’t it?

Having spent the past two days at the recent ORION summit, that's the feeling I'm left with: it is all about bandwidth.

It's not that there's anything wrong with that … it's just that there's so much more to networks than just bandwidth. And of course, this is especially true for so-called 'advanced networks' like ORION.

Using George Gilder's Laws of the Telecosm as a point of reference, I ranted (published version, complete original version) about this 'bandwidth fixation' and concluded:

  • Infinite bandwidth is overrated
  • Smart pipes rule

Why?

Compelling possibilities arise from being able to isolate and manipulate discrete wavelengths of the EM spectrum across intelligent networks.

These days, such compelling possibilities are being routinely demonstrated via ORION, CANARIE and numerous other advanced networks. CANARIE even placed this capability into the hands of mere mortals:

User Controlled Light Paths (UCLP) software allows end-users, either people or sophisticated applications, to treat network resources as software objects and provision and reconfigure lightpaths within a single domain or across multiple, independently managed, domains. Users can also join or divide lightpaths and hand off control and management of these larger or smaller private sub-networks to other users.
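To see why 'network resources as software objects' is such a compelling idea, here is a purely illustrative sketch (emphatically not the actual UCLP API, just a toy model of the concept) of provisioning a lightpath, dividing it, and handing a piece off to another user:

    # Toy model of the 'network resources as software objects' concept;
    # this is NOT the actual UCLP API, only an illustration of the idea.

    from dataclasses import dataclass, field

    @dataclass
    class Lightpath:
        capacity_gbps: float                           # bandwidth this (sub-)lightpath carries
        owner: str                                     # who currently controls it
        segments: list = field(default_factory=list)   # hypothetical, e.g. ["Toronto-Ottawa"]

        def divide(self, gbps: float) -> "Lightpath":
            """Carve off a smaller sub-lightpath that can be handed to another user."""
            if gbps >= self.capacity_gbps:
                raise ValueError("cannot carve off more capacity than the lightpath carries")
            self.capacity_gbps -= gbps
            return Lightpath(gbps, self.owner, list(self.segments))

        def hand_off(self, new_owner: str) -> None:
            """Transfer control and management to another end-user or application."""
            self.owner = new_owner

    # An end-user (or application) provisions a path, divides it, and hands a piece to a collaborator.
    path = Lightpath(10.0, "research-lab-A", ["Toronto-Ottawa", "Ottawa-Montreal"])
    sub = path.divide(2.5)
    sub.hand_off("research-lab-B")
    print(path.owner, path.capacity_gbps, "|", sub.owner, sub.capacity_gbps)

Once the network is exposed this way, bandwidth stops being a static commodity and becomes something end-users and applications can manage in real time.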

There's no question that some applications require substantial bandwidth.

However, 'smart pipes', 'intelligent infrastructure', and UCLP all suggest that real-time bandwidth management – i.e., treating the network as a bona fide resource – is also compelling.