About 1:35 into the video above, Jeff Han impressively demonstrates a lava-lamp application on a multi-touch user interface.
Having spent considerable time in the past pondering the fluid dynamics (e.g., convection) of the Earth’s atmosphere and deep interior (i.e., mantle and core), I immediately saw a scientific use case in Han’s demonstration: Is it possible to computationally steer scientific simulations via multi-touch user interfaces?
A quick search via Google returns almost 20,000 hits … In other words, I’m likely not the first to make this connection 😦
In my copious spare time, I plan to investigate further …
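To make the steering idea concrete, here is a minimal, purely hypothetical sketch: a toy one-dimensional diffusion loop whose diffusivity can be changed mid-run. The `steering_events` dict stands in for a multi-touch front end (e.g., a pinch gesture rescaling a parameter); the names and the simulation itself are invented for illustration, not drawn from any real steering framework.

```python
def run_steered_simulation(steps, steering_events):
    """Toy diffusion loop whose diffusivity can be 'steered' mid-run.

    steering_events: dict mapping step -> new diffusivity, as if a
    touch gesture had adjusted the parameter while the solver ran.
    Returns a (step, diffusivity) trace showing when steering took effect.
    """
    diffusivity = 0.1               # initial parameter value
    field = [0.0, 1.0, 0.0]         # tiny 1-D temperature profile
    trace = []
    for step in range(steps):
        # Apply any steering input scheduled for this step.
        if step in steering_events:
            diffusivity = steering_events[step]
        # One explicit diffusion update on the interior point.
        field[1] += diffusivity * (field[0] - 2 * field[1] + field[2])
        trace.append((step, diffusivity))
    return trace

# The parameter changes at step 2, as if steered by a gesture mid-run.
print(run_steered_simulation(4, {2: 0.5}))
```

The essential point is only the decoupling: the solver loop polls for parameter updates between steps, so any input device, multi-touch included, can feed it.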
Also of note is how this connection was made: A friend sent me a link to an article on Apple’s anticipated tablet product. Since so much of the anticipation of the Apple offering relates to the user interface, it’s not surprising that reference was made to Jeff Han’s TED talk (the video above). Cool.
If you have any thoughts to share on multi-touch computational steering, please feel free to chime in.
One more thought … I would imagine that the gaming industry would be quite interested in such a capability – if it isn’t already!
The gaming industry, in my view, uses three different kinds of input technology right now: controller-based (be it a console gamepad or keyboard/mouse), motion-based (Wii), and touch-based (iPhone/iTouch).
Controller-based input has been the standard since before the first Nintendo Entertainment System, and can be traced back to Pong. It has been a very successful model, but as Jeff mentioned, we’re coming to an age where the interface will conform to us.
Motion-based input started off shaky, but as Nintendo has proven, the Wii has far outsold its competitors by appealing to a more ‘casual’ audience. Children, older folks, and people who just game ‘casually’ (the definition varies from person to person) make up a much larger population than the ‘hardcore’ crowd. Following this success, Sony and Microsoft have now announced their own motion-based hardware: Sony’s ‘Arc’ and Microsoft’s ‘Project Natal’.
Touch has become immensely popular, and the majority of the credit, if not all of it, belongs to Apple. Their API for creating touch-based games on the iTouch and iPhone has spawned thousands of development companies, and from a financial standpoint, Apple is doing extremely well. These games sell millions of dollars’ worth per year, and are typically priced under $10. As Jeff mentioned, the Apple Tablet has sparked a lot of conversation about how to put this new tool to use in other areas of interest.
In the interest of keeping this comment short: at this stage of development, I do not see this touch overlay being used widely in gaming, for a few reasons. The foremost is financial; these overlay screens are not cheap. The second is development; existing games, and everything in the pipeline from now until (likely) 2011/2012, are being coded against controller- or motion-based input schemes. Scrapping all that work to start anew for a fresh but narrow slice of users would not be very worthwhile to developers.
The third is simply that on-screen UIs are still all but required in games by default; there are just too many reasons to keep them. We’ll all be Tom Cruise from Minority Report, gesturing around our living rooms to command our gaming avatars, before I see us tapping away on a virtual screen.
Thanks for your input from the gaming perspective, Shaun!
Perhaps the gaming industry’s UI dichotomy – traditional and motion-based interfaces for fixed platforms, touch for mobile/handheld devices – will persist for a while …