Schedule at a Glance (tentative)

| Time | Dec. 5th (Monday) | Dec. 6th (Tuesday) Regular | Dec. 6th (Tuesday) Short Papers | Dec. 7th (Wednesday) | Dec. 8th (Thursday) Tutorials | Dec. 8th (Thursday) Workshop |
|---|---|---|---|---|---|---|
| 9:00-10:40 | Keynote 1 | Papers 4 (Human Factors and Medical Applications) | Short Papers 1 (Telepresence) | Keynote 2 | Tutorial 1 & Tutorial 2 | Full Day Free Workshop (9:00-17:00) |
| 10:40-10:50 | Break | Break | Break | Break | Break | |
| 10:50-12:30 | Papers 1 (Modeling) | Papers 5 (Displays) | Short Papers 2 (Input/Output) | Papers 7 (Mixed Reality 2) | Tutorial 1 & Tutorial 2 | |
| 12:30-14:00 | Lunch | Lunch | Lunch | Lunch | Lunch | |
| 14:00-15:40 | Papers 2 (Input/Output) | Demos + Posters | Demos + Posters | Papers 8 (Haptics) | Tutorial 3 & Tutorial 4 | |
| 15:40-16:00 | Break | Break | Break | Break | Break | |
| 16:00-17:40 | Papers 3 (Mixed Reality 1) | Papers 6 (Art & Entertainment) | Short Papers 3 (Modeling & Human Factors) | Papers 9 (Applications) | Tutorial 3 & Tutorial 4 | |
| 18:30-21:30 | | | | Dinner | | |
Advance Program
Dec. 5th (Monday)
9:00-10:40 Keynote 1

Spatial Augmented Reality
Oliver Bimber, Bauhaus University Weimar, Germany

Abstract: Novel approaches have taken augmented reality beyond traditional eye-worn or hand-held displays, enabling additional application areas. New display paradigms exploit large spatially aligned optical elements, such as mirror beam-splitters, transparent screens or holograms, as well as video projectors. We therefore call this technological variation "Spatial Augmented Reality (SAR)". In many situations, SAR displays are able to overcome technological and ergonomic limitations of conventional AR systems. Due to the falling cost and increasing availability of projection technology, personal computers and graphics hardware, there has been considerable interest in exploiting SAR systems in universities, research laboratories, museums, industry and the art community. This talk will present state-of-the-art concepts, details of hardware and software implementations, and current areas of application. It draws parallels between display techniques used for virtual reality and augmented reality and stimulates thinking about alternative approaches to AR.
10:50-12:30 Papers 1 (Modeling)

Free Hand Stroke Based Virtual Sketching, Deformation and Sculpting of NURBS Surface
Han-wool Choi, Hee-joon Kim, Jeong-in Lee, and Young-Ho Chai

Detail Sculpting using Cubical Marching Squares
Chien-Chang Ho, Cheng-Han Tu, and Ming Ouhyoung

BRDF Estimation System for Structural Colors
Ryo Shimada and Yoichiro Kawaguchi

14:00-15:40 Papers 2 (Input/Output)

Distributed Autonomous Interface using ActiveCube for Interactive Multimedia Contents
Ryoichi Watanabe, Yuichi Itoh, Yoshifumi Kitamura, Fumio Kishino, and Hideo Kikuchi

Shoe-shaped Interface for Inducing a Walking Cycle
Junji Watanabe, Hideyuki Ando, and Taro Maeda

Semantic 3D Object Manipulation using Object Ontology in Multimodal Interaction Framework
Sylvia Irawati, Daniela Calderón, and Heedong Ko

Coeno - Enhancing Face-to-Face Collaboration
Michael Haller, Mark Billinghurst, Daniel Leithinger, Jakob Leitner, and Thomas Seifried

16:00-17:40 Papers 3 (Mixed Reality 1)

AR Registration by Merging Multiple Planar Markers at Arbitrary Positions and Poses via Projective Space
Yuko Uematsu and Hideo Saito

Toward Immersive Telecommunication: 3D Video Avatar with Physical Interaction
Sang-Yup Lee, Ig-Jae Kim, Sang C. Ahn, Myo-Taeg Lim, and Hyoung-Gon Kim

Visualization Methods for Outdoor See-Through Vision
Takahiro Tsuda, Haruyoshi Yamamoto, Yoshinari Kameda, and Yuichi Ohta

Self-Aware Framework for Adaptive Augmented Reality
Eduardo Veas, Kiyoshi Kiyokawa, and Haruo Takemura

Dec. 6th (Tuesday)
9:00-10:40 Papers 4 (Human Factors and Medical Applications)

Visual Perception Modulated by Galvanic Vestibular Stimulation
Naohisa Nagaya, Maki Sugimoto, Hideaki Nii, Michiteru Kitazaki, and Masahiko Inami

Visual-motor Adaptation to Stabilize Perceptual World: Its Generality and Specificity
Michiteru Kitazaki and Akira Shimizu

Development of VR-STEF System with Force Display Glove System
Ken'ichi Koyanagi, Yuki Fujii, and Junji Furusho

Visual Support for Medical Communication by using Projector-Based Augmented Reality and Thermal Markers
Jeremy Bluteau, Itaru Kitahara, Yoshinari Kameda, Haruo Noma, Kiyoshi Kogure, and Yuichi Ohta

9:00-10:40 Short Papers 1 (Telepresence)

Telepresence and User-initiated Control
Daniel Kerse, Holger Regenbrecht, and Martin Purvis

An AR System for Haptic Communication
Jongeun Cha, Ian Oakley, Junhun Lee, and Jeha Ryu

Tankwar - AR Games at GenCon Indy 2005
Trond Nilsen

Virtually Enhancing the Perception of User Actions
Patrick Horain, José Marques Soares, Piyush Kumar Rai, and André Bideau

Videos in Space: A Study on Presence in Video Mediating Communication Systems
Aimèe Hills, Jörg Hauber, and Holger Regenbrecht

10:50-12:30 Papers 5 (Displays)

Augmented Telexistence in Smart Space
Ig-Jae Kim, Sang-Yup Lee, Sang Chul Ahn, and Hyoung-Gon Kim

Real World Video Avatar: Real-time and Real-size Transmission and Presentation of Human Figure
Tomohiro Tanikawa, Yasuhiro Suzuki, Koichi Hirota, and Michitaka Hirose

Study of Saccade-incident Information Display using Saccade Detection Device
Hideyuki Ando, Junji Watanabe, Tomohiro Amemiya, and Taro Maeda

VR Content Platform for Multi-Projection Displays with Realtime Image Adjustment
Takafumi Koike, Kei Utsugi, and Michio Oikawa

10:50-12:30 Short Papers 2 (Input/Output)

The Tangible Augmented Street Map
Antoni Moore and Holger Regenbrecht

A Design of Cell-based Pin-Array Tactile Display
Heesook Shin, Misook Sohn, and Junseok Park

Combining Passive Haptics with Redirected Walking
Luv Kohli, Eric Burns, Dorian Miller, and Henry Fuchs

Annotation Authoring in Collaborative 3D Virtual Environments
Rieko Kadobayashi, Julian Lombardi, Mark P. McCahill, Howard Stearns, Katsumi Tanaka, and Alan Kay

OSGARToolKit: Tangible + Transitional 3D Collaborative Mixed Reality Framework
Raphael Grasset, Julian Looser, and Mark Billinghurst

14:00-15:40 Demos + Posters

Prototype Application with Electromyogram Interface in Immersive Virtual Environment
Hideaki Touyama, Koichi Hirota, and Michitaka Hirose

An Emotion Model Using Emotional Memory and Consciousness Occupancy Ratio
Sung June Chang and In Ho Lee

Age Invaders
Eng Tat Khoo, Shang Ping Lee, and Adrian David Cheok

Internet.Pajama
James Teh, Shang Ping Lee, and Adrian David Cheok

The Reduction of Mental Strain Using the Visual Sign in Virtual Environment
Hiroshi Watanabe, Wataru Teramoto, Hiroyuki Umemura, and Katsunori Matsuoka

Appearance Based Prosthetic Eye
Naho Inamoto, Takeo Kanade, and Hideo Saito

AirGrabber: Virtual Keyboard using Miniature Infrared Camera and Tilt Sensor
Masataka Imura, Masahiro Fujimoto, Yoshihiro Yasumuro, Yoshitsugu Manabe, and Kunihiro Chihara

Auditory Saliency Captures Visual Timing: Effect of Luminance
Takuro Kayahara

A Surface Acoustic Wave Tactile Display on Phantom
Masaya Takasaki, Takeo Sakurada, Hiroyuki Kotani, and Takeshi Mizuno

Gaze Tracking System Using Single Camera and Purkinje Image
Jinwoo Park, Yong-Moo Kwon, and Kwanghoon Sohn

The Interactive Virtual Showcase: A Four User Display for Museums
Hendrik Wendler and Bernd Fröhlich

Head Mounted Display with Peripheral Vision
Jin-uk Baek, Jaehoon Jung, and Gerard J. Kim

A BCI based Virtual Control Testbed for Motion Disabled People
Jayoung Goo, Dongjun Suh, Hyun Sang Cho, Kyoung S. Park, and Minsoo Hahn

A Simplified Hand Gesture Interface for Spherical Manipulation in Virtual Environments
Jong Seo Lee, Seung Eun Lee, Sun Yean Jang, and Kyoung Shin Park

Constructing a Physical Layer of Virtual Cities for Disaster Mitigation
Ping Zhu, Yozo Fujino, Muneo Hori, and Junji Kiyono

Interaction System Based On Virtual Candle Light
Takashi Yagi, Shouji Sakamoto, Tetsuo Shoji, Yasuhiko Watanabe, and Yoshihiro Okada

Tuning Viewing Parameters for Efficient and Accurate 3D Interaction
Dong Wook Lee and Jinah Park

Multi-projection based Hemispherical Display
Gun A. Lee, Hyun Kang, Dong-Sik Jo, and Wookho Son

16:00-17:40 Papers 6 (Art & Entertainment)

Raw Emotional Signalling, via Expressive Behaviour
Anthony Brooks and Eva Petersson

GPU-based 3D Oriental Color-Ink Rendering
Crystal S. Oh and Yang Hee Nam

The Immersant Experience of Osmose and Ephémère
Harold Thwaites

Sonic Panoramas: Experiments with Interactive Landscape Image Sonification
Eric Kabisch, Falko Kuester, and Simon Penny

16:00-17:40 Short Papers 3 (Modeling & Human Factors)

Depth-Image Based Full 3D Modeling Using Trilinear Interpolation and Distance Transform
Seung-man Kim, Jeung-chul Park, and Kwan H. Lee

Immediate Creation of User Centric Interaction Model
Ungyeon Yang, Wookho Son, and Gerard Junghyun Kim

Mesh Based 3D Shape Deformation for Image Based Rendering from Uncalibrated Multiple Views
Satoshi Yaguchi and Hideo Saito

Estimation of Few Light Sources from Environment Maps for Fast Realistic Rendering
Naveen Dachuri, Seung Man Kim, and Kwan H. Lee

The Influence of Room Structure on the Perceived Direction of Up in Immersive Visual Displays
H. L. Jenkin, R. T. Dyde, M. R. Jenkin, and L. R. Harris

Dec. 7th (Wednesday)
9:00-10:40 Keynote 2

Mixed Reality and Human Centered Media for Social and Physical Interactive Computer Entertainment
Adrian Cheok, Mixed Reality Lab, Nanyang Technological University, Singapore

Abstract: This talk outlines new facilities within ubiquitous human media spaces that support embodied interaction between humans and computation, both socially and physically, with the aim of novel interactive computer entertainment. We believe that the current approach to developing electronics-based entertainment environments is somewhat lacking in its support for multi-person, multi-modal interactions. In this talk, we present an alternative ubiquitous computing environment based on an integrated design of real and virtual worlds. We discuss several research prototype systems, such as Magic Land, Virtual Kyoto Garden, Age Invaders, Poultry Internet, Tilt-Pad, and Human Pacman. The functional capabilities implemented in these systems include spatially aware 3D navigation, tangible interaction, and ubiquitous human media spaces. Details, benefits, and design-support issues of these systems are discussed.
10:50-12:30 Papers 7 (Mixed Reality 2)

Virtual Object Manipulation using a Mobile Phone
Anders Henrysson, Mark Billinghurst, and Mark Ollila

Texture Overlay onto Deformable Surface for Virtual Clothing
Jun Ehara and Hideo Saito

Occlusion Detection of Real Objects using Contour Based Stereo Matching
Kenichi Hayashi, Hirokazu Kato, and Shogo Nishida

A Vision-Based AR Registration Method Utilizing Edges and Vertices of 3D Model
Ryo Hirose and Hideo Saito

14:00-15:40 Papers 8 (Haptics)

Analytic Determination of the Tension Capable Workspace of Cable Actuated Haptic Interfaces
Emmanuel Brau, Jean Paul Lallemand, and Florian Gosselin

Phantom-DRAWN: Direction Guidance using Rapid and Asymmetric Acceleration Weighted by Nonlinearity of Perception
Tomohiro Amemiya, Hideyuki Ando, and Taro Maeda

Development of Five-Fingered Haptic Interface: HIRO II
Haruhisa Kawasaki, Tetsuya Mouri, M. Osama Alhalabi, Yasutaka Sugihashi, Yoshio Ohtuka, Sho Ikenohata, Kazushige Kigaku, Vytautas Daniulaitis, Kazuyasu Hamada, and Tatsuo Suzuki

16:00-17:40 Papers 9 (Applications)

COSMOS: A VR-Based Proof-of-Concept Interface for Advanced Space Robot Control
Jean-Francois Lapointe

A Generic Virtual Reality Software System's Architecture and Application
Frank Steinicke, Timo Ropinski, and Klaus Hinrichs

A Constrained Road-Based VR Navigation Technique for Travelling in 3D City Models
Timo Ropinski and Frank Steinicke

18:30-21:30 Dinner

Dec. 8th (Thursday)
9:00-12:30 Tutorial 1

Spatial Augmented Reality
Oliver Bimber (Faculty of Media, Bauhaus University Weimar, Germany)

Novel approaches have taken augmented reality beyond traditional eye-worn or hand-held displays, enabling additional application areas. New display paradigms exploit large spatially aligned optical elements, such as mirror beam-splitters, transparent screens or holograms, as well as video projectors. We therefore call this technological variation "Spatial Augmented Reality (SAR)". In many situations, SAR displays are able to overcome technological and ergonomic limitations of conventional AR systems. Due to the falling cost and increasing availability of projection technology, personal computers and graphics hardware, there has been considerable interest in exploiting SAR systems in universities, research laboratories, museums, industry and the art community. Clear parallels can be drawn to the development of virtual environments from head-attached displays to spatial projection screens, and we believe an analogous evolution of augmented reality has the potential to be similarly successful in many application domains. SAR and body-attached AR are thus not competing but complementary approaches. This seminar will present state-of-the-art concepts, details of hardware and software implementations, and current areas of application. It draws parallels between display techniques used for virtual reality and augmented reality and stimulates thinking about alternative approaches to AR.

Covered Topics:

This tutorial is appropriate for beginners in digital art and media. No programming or specific mathematical background is required. General knowledge of basic computer graphics techniques, 3D tools and optics is helpful but not necessary.

Course Material and Further Information:

9:00-12:30 Tutorial 2

Creating Ubiquitous Mobile Solutions
(TBD)

14:00-17:40 Tutorial 3

Novel Input Devices and Multi-Viewer Display Technology for VR
Bernd Froehlich (Faculty of Media, Bauhaus University Weimar, Germany)

This tutorial presents a variety of input devices for controlling three-dimensional graphics applications. We will also introduce a scheme for classifying these devices and show how to systematically explore the design space using this scheme. An example is the six-degree-of-freedom (DOF) GlobeFish device, which allows natural separation of translational and rotational input. Our user tests confirm that the GlobeFish performs better in a 3D docking task than commercially available 6-DOF desktop devices. Other devices for two-handed use with large projection-based environments are discussed. Some of these devices provide twelve or more degrees of freedom and allow quasi-simultaneous navigation and manipulation without explicit mode changes. The second part of the tutorial discusses solutions to the problem of providing multiple tracked viewers with individual stereoscopic images, which has always been a major challenge for VR systems. One of the most promising approaches overlays the images of multiple LCD projectors on top of each other and shutters the projectors and users' eyes in sync. Such systems support co-located interaction between multiple users and require new approaches to the design of devices and interaction techniques.

Short Biography:

14:00-17:40 Tutorial 4

Next Generation Computer Entertainment
Adrian Cheok (Mixed Reality Lab, Nanyang Technological University, Singapore)

9:00-17:00 Full Day Free Workshop

International Workshop on Advanced Processing for Ubiquitous Networks
See http://www.ozawa.ics.keio.ac.jp/coews/ for more details