Multi-Touch Computational Steering

About 1:35 into the video above, Jeff Han impressively demonstrates a lava-lamp application on a multi-touch user interface.

Having spent considerable time in the past pondering the fluid dynamics (e.g., convection) of the Earth’s atmosphere and deep interior (i.e., mantle and core), I immediately saw a scientific use case in Han’s demonstration: Is it possible to computationally steer scientific simulations via multi-touch user interfaces?
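To make the idea concrete, here’s a minimal sketch of what touch-driven steering might look like: a toy 1-D heat-diffusion loop whose diffusivity is nudged by (simulated) touch events. The event format and the drag-to-parameter mapping are my own assumptions for illustration, not anything from Han’s demo.

```python
# Toy sketch: steering a 1-D heat-diffusion simulation with touch events.
# The touch-event format and parameter mapping are hypothetical.
import random

import numpy as np

GRID = 100
u = np.zeros(GRID)
u[GRID // 2] = 1.0     # initial hot spot
diffusivity = 0.1      # the parameter we let the user "steer"

def fake_touch_events():
    """Stand-in for a real multi-touch stream: occasional vertical drags."""
    for _ in range(500):
        yield {"dy": random.uniform(-1, 1)} if random.random() < 0.05 else None

for step, event in enumerate(fake_touch_events()):
    if event is not None:
        # Map a vertical drag to a change in diffusivity, clamped for stability.
        diffusivity = float(np.clip(diffusivity + 0.01 * event["dy"], 0.0, 0.5))
    # Explicit finite-difference update (stable for diffusivity <= 0.5).
    u[1:-1] += diffusivity * (u[2:] - 2 * u[1:-1] + u[:-2])
    if step % 100 == 0:
        print(f"step {step}: diffusivity={diffusivity:.3f}, peak={u.max():.3f}")
```

In a real steering setup, the simulation would run on a cluster while the multi-touch surface drives parameter updates over a network; the loop above just collapses that architecture into one process.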

A quick search via Google returns almost 20,000 hits … In other words, I’m likely not the first to make this connection 😦

In my copious spare time, I plan to investigate further …

Also of note is how this connection was made: A friend sent me a link to an article on Apple’s anticipated tablet product. Since so much of the anticipation of the Apple offering relates to the user interface, it’s not surprising that reference was made to Jeff Han’s TED talk (the video above). Cool.

If you have any thoughts to share on multi-touch computational steering, please feel free to chime in.

One more thought … I would imagine that the gaming industry would be quite interested in such a capability – if it isn’t already!

Digital Terrain Mapping via LIDAR

From the purely scientific (ozone-column mapping, imaging of hydrometeors in clouds) to the commercial (on-board detection of clear-air turbulence, CAT), my exposure to LIDAR applications has been primarily atmospheric.

Of course, other applications of LIDAR technology exist, and one of these is Digital Terrain Mapping (DTM).

Terra Remote Sensing Inc. is a leader in LIDAR-based DTM. Particularly impressive is their ability to perform surface DTM in areas of dense vegetation. As I learned at a very recent meeting of the Ontario Association of Remote Sensing (OARS), Terra has already found a number of very practical applications for LIDAR-based DTM.

Some additional applications that come to mind are:

  • DTM of urban canopies for atmospheric experiments – Terra has already mapped buildings for various purposes. The same approach could be used to better ground (sorry 😉) atmospheric experiments. For example, the boundary-layer modeling that was conducted for Joint Urban 2003 (JU03) employed a digitization of Oklahoma City. A LIDAR-based DTM would’ve made this an even more realistic effort.
  • Monitoring the progress of Global Change in the Arctic – In addition to LIDAR-based DTM, Terra is also having some success characterizing surfaces based on LIDAR intensity measurements. Because open water and a glacier would be expected to have different DTM and intensity characteristics, Terra should also be able to monitor Global Change as nunataks are progressively transformed into traditional islands (land isolated and surrounded by open water). With the Arctic as a bellwether for Global Change, it’s not surprising that the nunatak-to-island transformation is getting attention.

Although my additional examples are (once again) atmospheric in nature, Terra is demonstrating that there are numerous applications for LIDAR-based technologies.
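As a rough illustration of the bare-earth idea, here’s a minimal sketch of grid-minimum ground filtering – one simple way to pull a terrain surface out of a vegetated LIDAR point cloud. The synthetic points and the single-pass minimum filter are my own simplifications; production DTM pipelines (Terra’s included, no doubt) are far more sophisticated.

```python
# Toy sketch: bare-earth DTM by grid-minimum filtering of LIDAR returns.
# Synthetic data; real pipelines use iterative/morphological ground filters.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic point cloud: gently sloping ground plus random canopy returns.
n = 5000
x = rng.uniform(0, 100, n)
y = rng.uniform(0, 100, n)
ground = 0.05 * x + 0.02 * y
canopy = rng.uniform(0, 20, n) * (rng.random(n) < 0.6)  # ~60% hit vegetation
z = ground + canopy

# Grid-minimum filter: the lowest return per cell approximates the ground.
cell = 5.0
cols = int(100 / cell)
dtm = np.full((cols, cols), np.nan)
ix = np.minimum((x / cell).astype(int), cols - 1)
iy = np.minimum((y / cell).astype(int), cols - 1)
for i, j, h in zip(ix, iy, z):
    if np.isnan(dtm[i, j]) or h < dtm[i, j]:
        dtm[i, j] = h

print("estimated ground heights (m):")
print(np.round(dtm, 1))
```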

Annotation Paper Submitted to HPCS 2007 Event

I’ve blogged and presented recently (locally and at an international scientific event) on the topic of annotation and knowledge representation.

Working with co-authors Jerusha Lederman, Jim Freemantle and Keith Aldridge, I’ve prepared a written version of the recent AGU presentation and submitted it to the HPCS 2007 event. The abstract is as follows:

Semantically Enabling the Global Geodynamics Project:
Incorporating Feature-Based Annotations via XML Pointer Language (XPointer)

Earth Science Markup Language (ESML) is efficient and effective in representing scientific data in an XML-based formalism. However, features of the data being represented are not accounted for in ESML. Such features might derive from events, identifications, or some other source. In order to account for features in an ESML context, they are considered from the perspective of annotation. Although it is possible to extend ESML to incorporate feature-based annotations internally, complicating factors are identified that apply to ESML and most XML dialects. Rather than pursue the ESML-extension approach, an external representation for feature-based annotations via XML Pointer Language (XPointer) is developed. In previous work, it has been shown that it is possible to extract relationships from ESML-based representations, and capture the results in the Resource Description Framework (RDF). Application of this same requirement to XPointer-based annotations of ESML representations results in a revised semantic framework for the Global Geodynamics Project (GGP).
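For a flavour of what an external, XPointer-based annotation might look like once captured in RDF, here’s a minimal sketch using the rdflib Python library. The namespace, the ESML element path, and the annotation predicates are hypothetical placeholders, not the actual GGP schema.

```python
# Minimal sketch: recording a feature-based annotation of an ESML fragment
# as RDF triples. All URIs below are hypothetical placeholders.
from rdflib import Graph, Literal, Namespace, URIRef

# XPointer expression addressing a (hypothetical) element in an ESML file.
target = URIRef(
    "http://example.org/ggp/station42.esml"
    "#xpointer(/esml:dataset/esml:field[@name='velocity'])"
)

ANNO = Namespace("http://example.org/annotation#")

g = Graph()
g.bind("anno", ANNO)
g.add((target, ANNO.feature, Literal("core-sensitive oscillation")))
g.add((target, ANNO.source, Literal("event detection")))

print(g.serialize(format="turtle"))
```

The point of the external approach is visible even in this toy: the ESML document itself is untouched, while the annotation lives entirely in the RDF graph.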

Once the paper is accepted, I’ll make a pre-submission version available online.

Because the AGU session I participated in has also issued a call for papers, I’ll be extending the HPCS 2007 submission in various interesting ways.

And finally, thoughts are starting to gel on how annotations may be worked into the emerging notions I’ve been having on knowledge-based heuristics.

Stay tuned.

Quantitative classification of cloud microphysical imagery via fractal dimension calculations

I recently referred to a paper I wrote for a Fractals in Engineering conference in the mid-90s:

I did lead a project at KelResearch where our objective was to classify hydrometeors (i.e., raindrops, snowflakes, etc.). The hydrometeors were observed in situ by a sensor deployed on the wing of an airplane. Data was collected as the plane flew through winter storms. (Many of these campaigns were spearheaded by Prof. R. E. Stewart.) What we attempted to do was automate the classification of the hydrometeors on the basis of their shape. More specifically, we attempted to estimate the fractal dimension of each observed hydrometeor in the hopes of providing an automated classification scheme. Although this was feasible in principle, the resolution offered by the sensor made this impractical.
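For the curious, the fractal-dimension estimate we had in mind was essentially box counting. Here’s a minimal sketch on a synthetic binary silhouette; the real imagery, resolutions, and thresholds were, of course, instrument-specific.

```python
# Minimal sketch: box-counting fractal dimension of a binary silhouette.
import numpy as np

def box_count_dimension(img: np.ndarray) -> float:
    """Estimate the fractal dimension of a square binary image by box counting."""
    size = img.shape[0]
    box_sizes = [s for s in (2, 4, 8, 16, 32) if s < size]
    counts = []
    for s in box_sizes:
        trimmed = img[: size - size % s, : size - size % s]
        # Partition into s x s boxes and count boxes containing any pixel.
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, -1, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # Slope of log(count) vs. log(1/size) estimates the dimension.
    coeffs = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return float(coeffs[0])

# Synthetic "snowflake-ish" test pattern: a noisy plus-shaped silhouette.
rng = np.random.default_rng(0)
img = np.zeros((64, 64), dtype=bool)
img[28:36, :] = True
img[:, 28:36] = True
img ^= rng.random((64, 64)) < 0.05  # speckle to roughen the boundary

print(f"estimated fractal dimension: {box_count_dimension(img):.2f}")
```

The sensor-resolution problem mentioned above shows up directly here: with too few pixels per hydrometeor, the log-log fit has almost no usable range of box sizes.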

I’ve now added the citation and paper to my publications list.

I expect to revisit this paper soon … stay tuned.

Genetic Aesthetics: Generative Software Meets Genetic Algorithms

I’m still reading Cloninger’s book, and just read a section on Generative Software (GS) – software used by contemporary designers to “… automate an increasingly large portion of the creative process.” As implied by the name, GS can produce a tremendous amount of output. It’s then up to the designer to be creatively stimulated as they sift through the GS output.

As I was reading Cloninger’s description, I couldn’t help but make my own connections with Genetic Algorithms (GAs). I’ve seen GAs applied in the physical sciences. For example, GAs can be used to generate models to fit data. The scientist provides an ancestor (a starting model), and variations are then derived through genetic processes such as mutation. Only the models with appropriate levels of fitness survive subsequent generations. Ultimately, what results is the model that best explains the data, according to the GA’s fitness criterion.
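A bare-bones version of that model-fitting loop might look like the following sketch – mutation-only, fitting a line to noisy synthetic data. Real GAs add crossover, smarter selection, and domain-specific encodings.

```python
# Minimal sketch: mutation-only genetic algorithm fitting y = a*x + b to data.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic observations from a "true" model, with noise.
x = np.linspace(0, 10, 50)
y = 2.5 * x + 1.0 + rng.normal(0, 1, x.size)

def fitness(model: np.ndarray) -> float:
    """Negative sum of squared residuals; higher is fitter."""
    a, b = model
    return -float(np.sum((y - (a * x + b)) ** 2))

# Start from a single ancestor model; evolve by mutation plus selection.
population = [np.zeros(2)]
for generation in range(200):
    offspring = [p + rng.normal(0, 0.1, 2) for p in population for _ in range(10)]
    population = sorted(population + offspring, key=fitness, reverse=True)[:5]

best = population[0]
print(f"best model after 200 generations: a={best[0]:.2f}, b={best[1]:.2f}")
```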

In an analogous way, this is also what happens with the output from GS. Of course, in the GS case, it is the designers themselves who determine what survives, according to their own criteria.

The GS-GA connection is even stronger than my own association may cause you to believe.

In interviewing Joshua Davis for his book, Cloninger states:

At one point, you talked about creating software that would parse through the output of your generative software and select the iterations you were most likely to choose.

Davis responds:

That’s something [programmer] Branden Hall and I worked on called Genetic Aesthetic. It uses a neural network and genetic algorithms to create a “hot or not” situation. It says, “Rate this composition I generated on a scale from 1 to 10.” If I give it a 1, it says, “This isn’t beautiful. I should look at what kind of numbers were generated in this iteration and record those as unfavorable.” You have to train the software. Because the process is based on variables and numbers, over a very short period of time it’s able to learn what numbers are unsatisfactory and what numbers are satisfactory to that individual human critic. It changes per individual.

That certainly makes the GS-GA connection explicit – and poetic, too. Genetic Aesthetic – I like that!

I’ve never worked with GAs. However, I did lead a project at KelResearch where our objective was to classify hydrometeors (i.e., raindrops, snowflakes, etc.). The hydrometeors were observed in situ by a sensor deployed on the wing of an airplane. Data was collected as the plane flew through winter storms. (Many of these campaigns were spearheaded by Prof. R. E. Stewart.) What we attempted to do was automate the classification of the hydrometeors on the basis of their shape. More specifically, we attempted to estimate the fractal dimension of each observed hydrometeor in the hopes of providing an automated classification scheme. Although this was feasible in principle, the resolution offered by the sensor made this impractical. Nonetheless, it was an interesting opportunity for me to personally explore the natural Genetic Aesthetics afforded by Canadian winter storms!

On Using Images to Google Images

In a recent Scientific American article (“A Farewell to Keywords”, July 2006), IT columnist Gary Stix provides an update on the idea of using images to find images.

Stix highlights work underway at Microsoft and Google. Both companies appear focused on geometric methods. For example, Microsoft’s approach is based on the spatial orientation of triplets of features. Feature triplets of the image being scrutinized are compared with feature triplets of training images in a database; matching feature triplets constitute a positive search hit.
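This is not Microsoft’s actual algorithm, but a toy sketch of the general idea – comparing triplets of feature points via a scale-invariant triangle signature – might look like this:

```python
# Toy sketch of geometric triplet matching: compare images by the shapes of
# the triangles formed by their feature points. Not any vendor's real method.
from itertools import combinations

import numpy as np

def triplet_signatures(points: np.ndarray, decimals: int = 2) -> set:
    """Scale-invariant signature per point triplet: sorted side-length ratios."""
    sigs = set()
    for i, j, k in combinations(range(len(points)), 3):
        sides = sorted([
            np.linalg.norm(points[i] - points[j]),
            np.linalg.norm(points[j] - points[k]),
            np.linalg.norm(points[k] - points[i]),
        ])
        if sides[2] > 0:
            # Normalize by the longest side so the signature ignores scale.
            sigs.add((round(sides[0] / sides[2], decimals),
                      round(sides[1] / sides[2], decimals)))
    return sigs

# Hypothetical feature points from a "query" image and a scaled copy of it.
query = np.array([[0, 0], [4, 0], [4, 3], [1, 2], [2, 5]], dtype=float)
match = query * 2.0                      # same scene, different scale
other = np.array([[0, 0], [1, 0], [0, 1], [5, 5], [3, 1]], dtype=float)

q = triplet_signatures(query)
print("overlap with scaled copy:", len(q & triplet_signatures(match)))
print("overlap with unrelated set:", len(q & triplet_signatures(other)))
```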

Because today’s methods are based on image metadata (data about the image, such as its filename, its type, associated annotations, etc.), this image-centric approach is definitely innovative and presents interesting possibilities for application.

However, purely geometric schemes for Googling images are the image analog of Googling keyword combinations with Boolean expressions in pure text. Why? Both approaches are semantically challenged: they do not allow for context to be conveyed.

Google and others are actively working on smarter (i.e., semantically richer and more expressive) search engines. Although purely geometric methods for Googling images comprise an important first step, smarter methods will need a semantically solid basis.