Report by Jerry Isdale
HRL Laboratories LLC
There is not a lot of tourist stuff around San Antonio; however, the Alamo is a short walk away. Engulfed by a big city, it certainly looks different from what I recall from the movies. The building everyone recognizes as the Alamo is actually the chapel and was a small side building at the time of the battle.
[Photo: Diana Lee, Secretary 2001-2001 LA ACM Siggraph, gets her face painted and then a hug from Howard Neely (HRL Labs & LA Siggraph)]
One interesting new (?) thing on the floor - several vendors had pen & paper drawing areas set up, sometimes with live models. Some had a contest for images that included the company logo (nVidia). A nice low-tech appeal to the hordes of artists wandering the floor.
Intersense - showing off their wireless IS900 setups in several places. There were several single screen projection displays that used a set of IS900 trackers around the frame. Not well covered was the SoniWing, part of their SimTracker configuration. This single piece fixed frame is intended for cockpit and similar small area tracking. It was on display in the FCS Robotics booth along with the Haptic Master robotic arm. Intersense also has a set of 3 SoniStrips fixed in a rigid array that can be used for small areas. The IS900 is an excellent 6DOF tracker, based on my experience with it here at HRL. The entry price is about $17k (US).
Pixelogic was demonstrating their ZBrush product. Their ZSpheres modeling tool caught my eye, having recently seen BLUIsculpt. ZSpheres allow fairly sophisticated modeling based on spheres. It would be interesting to try such a tool in an immersive environment - of course the UI would need radical modification from its current WIMP form.
nVidia was showing their current graphics hardware, letting out only a little on the next-gen NV30 GPU (Ace's Hardware has some info leaked after the show). They had some impressive demos of real-time lighting/shading on XBox-type hardware. They also had some courses on their new GPU language Cg. I did pick up rumors that genlock would be featured on the NV30 Quadro boards, but the nVidia folks would only say it would be on some future card. Check out Tom's Hardware's review of Siggraph for more on nVidia.
Panoram Technologies showed their new "Digital Imaging Table", a flat 65" display with a resolution of 2300x1850 (0.53mm dot pitch). It uses four autocalibrating projectors with seamless edge blending; Olympus provides the autocalibration hardware/software. The image was viewable from extreme off angles and looked fantastic - bright, sharp, good color. It does not yet support motion images (animation, video, etc). Cost is about $300k, so don't get your hopes up. The autocalibration and edge-blending technology might be applicable to a large tiled wall display. The Olympus rep at the booth was open to discussing such applications.
Actuality Systems, Inc. showed their Perspecta® Spatial 3D Platform - one of the spinning-surface projection devices.
Z Corp was one of several 3D printer companies exhibiting at the show. The Z406 uses a full-color ink-jet-based technology. Models can be built using several colors, including images on the final surface. They are offering one free model printout from your CAD files. They also support VRML and several GIS formats. Their display included one 8" square terrain model with topo colors.
For 3D Inc was showing a single-point stereoscopy technique that converts a single 2D video stream into stereo pairs in real time (3-frame delay, if I recall correctly). It uses motion of objects and camera between frames to create the depth effect. The technique can be integrated with virtual environments for video textures, etc. Interesting idea.
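The report doesn't detail For 3D's actual algorithm, but the general frame-delay idea can be sketched in a few lines: pair each mono frame with one captured a few frames earlier, so motion between the two views reads as binocular parallax. The function name and buffering here are my own illustration, not the company's method.

```python
from collections import deque

def stereo_from_mono(frames, delay=3):
    """Pair each incoming mono frame with one from `delay` frames
    earlier. With horizontal camera/object motion, the temporal
    offset between the two views reads as binocular parallax."""
    buffer = deque(maxlen=delay + 1)
    pairs = []
    for frame in frames:
        buffer.append(frame)
        if len(buffer) > delay:
            # (left, right) = (newest, oldest); swap for opposite motion
            pairs.append((buffer[-1], buffer[0]))
    return pairs

# Feed in numbered stand-in frames: the first stereo pair is (3, 0).
pairs = stereo_from_mono(list(range(10)))
```

Note that the first `delay` frames produce no output, which matches the startup latency the booth demo would have shown.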
ATI introduced their Fire-GL X1 boards using the FGL 9700 VPU (Visual Processing Unit), aimed at the Digital Content Creation (DCC) and CAD markets. It can do a fair job in real time work too.
Other vendors are covered quite well by the other reviews.
Monday, Dave Pape gave a talk in the Educators Program on "Building Affordable Projective Immersive Displays". Notes for the course are available on his web site. Dave presented cost breakdowns for single-screen passive stereo systems using PC rendering. His estimates show a mid-range price of about $20k US. The major costs are the projectors and trackers. Software is another issue - Dave's systems use VRCO's software, which can carry a fair license fee. Trackers have lots of issues with interference, resolution, latency, and area of coverage. The cost of a good low-latency wide-area tracker like the IS900 can be more than $20k US.
Projectors are a big issue. Immersive systems used to use CRT projectors, which have lots of configuration parameters to correct for distortion, alignment, registration, etc. However, most companies have stopped making them - except for high-end simulator purposes. DLP projectors have taken over the market - unfortunately, most of these do not have the correction capabilities needed for good stereo image registration. Christie Digital and Barco both have DLP projectors for active stereo, but they cost more than Dave's estimate for the entire setup. (Note: Christie's Mirage series can do both active and passive stereo - for $50k-100k US.) Passive stereo generally requires two projectors and has its own alignment issues. Passive (polarized) stereo cannot be used with mirrors, as they tend to disrupt the polarization. This means the throw distance from projector to screen can be quite large. I did hear rumors of some low-cost projector folks working on short-throw lenses for passive displays. If you can avoid stereo (and lose some of the immersiveness), you can save a LOT - single projectors, folded optical path, no genlock requirement, etc.
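The unfolded-throw problem comes down to the standard projector relation: throw distance equals throw ratio times projected image width. A quick sketch (the lens ratios and screen size below are illustrative, not from the report):

```python
def throw_distance(throw_ratio, image_width_m):
    """Standard projector relation: distance from lens to screen
    equals the lens throw ratio times the projected image width."""
    return throw_ratio * image_width_m

# Illustrative numbers: a typical 2.0:1 lens filling a 3 m wide
# screen needs 6 m of unfolded throw behind the screen; a 0.8:1
# short-throw lens cuts that to 2.4 m.
room_depth_standard = throw_distance(2.0, 3.0)   # 6.0 m
room_depth_short = throw_distance(0.8, 3.0)      # 2.4 m
```

This is why short-throw lenses matter so much for passive rear-projection setups that cannot fold the light path with mirrors.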
Tuesday, there was a full-day course on "Commodity Clusters for Immersive Projection Environments". The course provided a good overview of the issues and technologies required for cluster-based IPEs, as well as some live demonstrations in the Siggraph CAL. Most multi-screen IPEs have used large multi-pipe SGI systems. Cluster IPEs use distributed rendering on a collection of lower-cost PC-based systems. These are not conventional computing "clusters" like the Beowulf architecture. They generally use conventional ethernet interconnects and some form of distributed, synchronized rendering. The course organizers are responsible for three open source software projects of note: Syzygy, NetJuggler, and DICElib. Syzygy is a fairly new project (v0.4) that looks very promising. It offers several different distribution architectures. NetJuggler is a distributed version of VRJuggler. It provides coordination for multiple copies of a VRJuggler application running on separate systems. The VRJuggler team is implementing cluster support in the next version of their project.

A subproject of NetJuggler is the SoftGenLock package. This project addresses the lack of "GenLock" on lower-cost graphics boards. GenLock refers to the ability to synchronize the display of frames on multiple devices. It is critical for active stereo displays. While a standard feature of high-end products (e.g. 3Dlabs Wildcat, Quantum3D), it is missing from commodity graphics boards from nVidia, ATI, Matrox, etc. SoftGenLock uses a fast communications channel (parallel port) and software register tweaking to implement genlock capability for these cards on Linux systems. It works - sorta. I've had some difficulties getting it to work well with my installation, but that is still a work in progress. I heard from several people that the next version of the nVidia Quadro boards will support genlock - something about expiring agreements with SGI. Hopefully these boards will be out by early 2003.
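SoftGenLock itself works at the video-timing level with register tweaking, which is hard to illustrate portably. The coarser idea it supports - no node swaps buffers until every node has finished rendering its frame - can be sketched with a barrier, using threads to stand in for networked render nodes. All names here are mine, not SoftGenLock's API.

```python
import threading

NUM_NODES = 3                 # stand-ins for cluster render nodes
swap_barrier = threading.Barrier(NUM_NODES)
swap_log = []                 # records (frame, node) at each swap

def render_node(node_id, frames):
    for frame in range(frames):
        # ... render this node's view of `frame` here ...
        swap_barrier.wait()   # block until every node finished the frame
        swap_log.append((frame, node_id))  # stand-in for swap-buffers

threads = [threading.Thread(target=render_node, args=(n, 2))
           for n in range(NUM_NODES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because each node re-enters the barrier before the next frame, every node's frame-0 swap completes before any frame-1 swap begins - the property an active stereo wall needs, though real genlock must also align the projectors' vertical refresh, not just the swap.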
DICElib is a less ambitious project, concentrating more on simulation synchronization and distributed application control. Overall, the course was a good introduction to the issues of commodity IPE. I hope the organizers can find a way to put their presentation online somewhere.
There was a recent announcement of a Workshop on Commodity-Based Visualization Clusters in conjunction with IEEE Visualization 2002, Sunday, October 27, 2002 (Boston, MA, USA). It should be a great follow-on to the topic.
Yesterday: Mapping the E-Tech Continuum; John M. Fujii
A poster mapping ten years of Siggraph Emerging Technology exhibits onto a 2D spiral galaxy shape. Two 'theme' variables are assigned to each exhibit. The primary theme is mapped to a spoke on the spiral; the secondary theme collects exhibits within the primary. It took me a few moments to figure it out and find previous projects. Interesting to see correlation within a cluster, although I'm not sure about the assignment of themes. Also, it is not easy to see relations where primary/secondary are reversed. For example, my Time Table of History (1991) was given a primary theme of Education with Hypermedia as the secondary theme, and there are two other projects in this cluster - but how many got Hypermedia as primary with Education as secondary?
|A New Step-in-Place Locomotion Interface for Virtual Environment
With Large Display System; Laroussi Bouguila, Tokyo Institute of Technology
A front-projection-based display with a small (pressure-sensitive?) turntable on which the participant stands and walks. The environment is a 2D maze. Small steps are required; turning your feet reorients the v-world briefly, then the turntable rotates back. It worked pretty well as long as you don't walk too fast. I was able to run through the maze walls with quick steps.
|ARS BOX with Palmist - Advanced VR-System based on commodity hardware;
Horst Hörtner, Ars Electronica Futurelab
Three channel, rear projection passive stereo display system with wireless Compaq iPaq for control. NVidia graphics boards on Linux computers, synchronization accomplished by the ArsSyncBox - derived from SoftGenLock. Rendering worked pretty well for several sequences. A tunnel simulation with odd black and white textures had significant problems with swimming textures when the eye point changed. This was a rendering bug that the artist turned into a feature.
|Block Jam; Henry Newton-Dunn, Sony Design Lab
A tangible interface to a music synthesizer. Arranging and clicking on blocks controls sequencing. An initial musical phrase is inserted at an end block and 'transmitted' through the chain. Corners and end points reflect the sounds. It looked like an interesting collaborative game, although the processing of each block was quite limited. It might be fun for longer if there were more options available (at the cost of a more complex interface?)
|Distributed Systems of Self-reconfiguring Robots; Zack Butler, et
al, Dartmouth College
Small robots that join together for locomotion through reconfiguring the cluster. Two types of bots were shown, only one form pictured here. Bots were assembled from parts built using 3D CAD printers. They moved kinda slow, but it's hard to get both speed and torque from small motors. Funded by NSF, not DARPA.
|Focus Plus Context Screens: Visual Context and Immersion on the
Desktop; Patrick Baudisch, Xerox PARC
An LCD panel inset in a front-projected display provides high-resolution detail within a larger, low-res image. It worked pretty nicely, although the user's shadow could be disconcerting. No interaction was demonstrated while I was watching.
|Immersive and Interactive Rear-Projected Stereo DLP Reality Center;
Andrew Joel, Barco Simulation Products
Three-channel rear-projection, curved-screen, active stereo display. Basically a wide projection wall. The new and unique technology was the use of special warping hardware/software to support active stereo and edge blending for DLP projectors. A similar system was shown last year, perhaps an earlier version? It still had very noticeable banding where projectors overlapped, especially when viewed off axis (sweet spot?).
|Lewis the Robotic Photographer; Cindy Grimm, Washington University
in St. Louis
A large mobile trash can with a camera on top that wandered about trying to take pictures of people. It used a fixed IR light source to position itself within a pre-defined area. It detects flesh tones and eye shapes in images, with legs detected by a sonar range finder. It seemed to do a fair job taking snapshot-quality pictures. Of more interest was how people interacted with the bot.
|NONA-Vision; Hiroo Iwata, et al University of Tsukuba
The system used nine cameras/displays to achieve semi-immersive telepresence. Motion of an array of cameras was slaved to an array of displays moved by the participant. The cameras were pointed at a small diorama of toys; however, the audience was quite visible and interacted with both the diorama and the driver. Interesting idea, but the two distinct view targets were disconcerting. It may have been more effective if only the audience or only the toys were visible.
|Occlusive Optical See-through Displays in a Collaborative Setup;
Optical see-through AR displays generally cannot show occlusion by real-world objects. This HMD uses a complex set of optics to provide occlusion. It works, but with a highly limited field of view and range. The wearer's hands and other objects are sensed by a set of cameras.
|Physiological Reaction and Presence in Stressful Virtual Environments;
Meehan,et al, UNC
This is the classic presence-experiment Pit room from UNC. Wearing an HMD, tracker, and physiological monitors, a patron is asked to interact with real and virtual objects in a training room and then asked to walk through a door onto a ledge 20 ft above a living room. The idea is to use physiological reactions to measure the sense of presence. Real objects in the training room, which the patron sees before donning the HMD, and a real 1" platform ledge around the pit room greatly enhance the experience. A fair percentage of patrons refuse to step through the door. I knew a lot about the project before trying it, yet I definitely reacted when asked to step to the edge and drop a ball down. Part of my reaction was due to the lack of an avatar for my feet - when I looked down to find the edge of the ledge, I could not see my feet as I slid them forward. That was very disconcerting, although adding trackers for feet would be technically and logistically difficult. There is already a significant tether carried by two assistants. This installation was a real experiment, adding significantly to the total number of subjects. It did require relaxing the rules for human experiments a bit - usually they exclude people who have had alcohol in the last 24 hrs, are sleep deprived, or are under stress. Those would eliminate almost everyone attending Siggraph. The experiment had significant logistics, requiring a small army of grad students, and it still took a significant amount of time to get people through.
|Public Anemone: An Organic Robot Creature, Cynthia Breazeal
et al, MIT Media Lab
Mostly a robotics effort which was supposed to be responsive to its audience. Some of the touch-sensitive robots had some reaction, but otherwise it didn't seem very responsive.
|Regeneration of Real Objects in the Real World; Hiroto Matsuoka,
et al, NTT Lifestyle and Environmental Technology Laboratories
An AR Toolkit-based project using a tablet PC with a camera attached to the back. Virtual objects appear in several forms over the fiducial targets depending on distance, angle, etc. I liked the effect, but more impressive was the 3D capture system collecting geometry and digital images of highly interreflective objects like glass, porcelain, etc.
|SmartFinger : Nail-Mounted Tactile Display; Hideyuki Ando, et
al University of Tokyo
A voice coil, LED, and photodetector attached to the fingernail allow haptic sensing of images. It worked OK as a quick demo when moved fairly quickly over high-contrast black-and-white lines, etc. It also worked on images visible only under ultraviolet light. A more robust, higher-resolution version with user training might have applications for the visually impaired.
|The Virtual Showcase: A Projection-Based Multi-User Augmented Reality
Display; Oliver Bimber et al
An inverted, truncated glass pyramid encases a real object, while graphics are projected up onto the sides of the pyramids. Multiple view angles are supported by the different pyramid sides. It was a pretty effective device with applications for museums, etc.
|TWISTER: A Media Booth; Kenji Tanaka et al Tachi Lab, University
A large drum of spinning LED arrays creates a full 360-degree surround, auto-stereoscopic display for a patron willing to stand inside (trust the techies, it won't fly apart or chop you up :-). This version (#3) was built especially to travel to Siggraph. Future versions will have higher resolution (much needed) and perhaps cameras to allow for mutual telexistence portals. However, they probably won't travel much.
|Ultrasound Visualization with the Sonic Flashlight; Damion Shelton,
George Stetton, Wilson Chang
Attach a half-silvered mirror and display to an ultrasound wand and you have x-ray vision (OK, ultrasonic vision, but you get the idea). Nice idea, decent execution, but it needed a better demonstration. I saw it early Monday (first day) and they were using water balloons as the target objects. Only one balloon had anything other than water inside, so it was hard to get the idea. Looking at my hand was a bit better. Unfortunately, there were no pregnant women around.
|Virtual Chanbara, Daijiro Koga
A virtual samurai sword-fighting game with a force-feedback 'sword' device - the GEKI2. It uses a pair of spinning wheels that are abruptly stopped to simulate collisions of the virtual sword with the opponent or his sword. It works, but the delay while the wheels spin up again after each impulse adds an unnatural feel. I also had some problems with the device getting caught up in the HMD tether.
There was one VR-related exhibit - "Body Language User Interface", or BLUIsculpt™, from the University of Alaska at Fairbanks/Arctic Research Supercomputer Center. This is a semi-immersive voxel-based modeling tool. The Studio version used a single 10 ft rear-projection stereo display (Christie Digital's polarized active/passive projector), with an Intersense IS900 tracker. Virtual sculpting is an interesting idea - Sensable's FreeForm product does it nicely with haptic display. I'm not sure the voxel technology of BLUIsculpt works as well. Perhaps a sphere-controlled interface similar to Pixelogic's ZSpheres might work better.
There was a lively discussion of the ability to read different scene graph formats. Generally, every scene graph package implements its own file format that closely reflects its internal data structures and design decisions. Reading a non-native format exposes all the differences and can be a difficult problem. Project Lodestone is an open source attempt to create an API to address the problem. Lodestone would provide basic readers for formats, and the scene graphs would implement a layer on top of the Lodestone API to map its nodes into the native format. This seems to me to be akin to the XML SAX callback system. There was some disagreement about the utility of Lodestone, probably stemming from the different views of a translation API versus an interchange file format. VRML and X3D might provide a common, scene-graph-agnostic interchange file format; Lodestone would read those files as well as other formats. While not a scene graph expert, I am not sure that the current Lodestone approach is best. The callback-style API has a fair number of deficiencies, especially when there may be confusion or a design mismatch in how the graph nodes affect each other. A DOM-style API would read the specific scene graph into a neutral-format document graph which could then be traversed at will by the translation layer.
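The DOM-versus-SAX distinction can be made concrete with a toy neutral graph. The sketch below is my own illustration (none of these class or function names are Lodestone's actual API): a DOM-style approach loads the source scene graph into neutral nodes that the translation layer can then walk at will, inspecting parents, siblings, or whole subtrees before committing to a mapping.

```python
class Node:
    """Neutral in-memory scene node, analogous to a DOM element."""
    def __init__(self, kind, **attrs):
        self.kind = kind
        self.attrs = attrs
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

def translate(node, emit, depth=0):
    """A translator walks the neutral graph freely, unlike a SAX-style
    callback API, which only sees nodes in the order the reader
    delivers them."""
    emit(depth, node.kind, node.attrs)
    for child in node.children:
        translate(child, emit, depth + 1)

# Build a tiny neutral graph, then 'translate' it into indented text.
root = Node("Transform", translation=(0, 1, 0))
shape = root.add(Node("Shape"))
shape.add(Node("Box", size=(2, 2, 2)))
lines = []
translate(root, lambda d, kind, attrs: lines.append("  " * d + kind))
```

A real translation layer would emit native scene graph nodes instead of text, but the shape of the traversal is the point.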
Another controversial point was brought up by Steve Baker of PLIB fame - that hardware shaders and the new crop of shader languages (Cg, OpenGL 2.0, etc.) will render conventional scene graphs obsolete. Steve bases this on his experience with the difficulty of handling the messy details of texture/material nodes in different scene graph systems. The SGI representatives noted that they are working on incorporating portions of OpenGL Shader (and other SGI Open** libraries) into OpenGL Performer. An interesting direction for shaders appeared in the Siggraph paper "Shader-Driven Compilation of Rendering Assets" by Paul Lalonde and Eric Schenk of Electronic Arts (Canada) Inc. It does not mention scene graphs directly, but discusses techniques to compile an interactive world description, including shaders, for use on a variety of game platforms.
Don Brutzman distributed the latest X3D CD-ROM. X3D is an XML interchange format for virtual environments that supports tags derived from GeoVRML.
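Because X3D is XML, generic XML tooling can already walk a scene file without a VRML-specific parser. A minimal sketch - the fragment below is simplified for illustration, not a conformant X3D file (a real one carries a DOCTYPE, profile, and more attributes):

```python
import xml.etree.ElementTree as ET

# Illustrative X3D-style fragment (structure simplified).
doc = """
<X3D version='3.0'>
  <Scene>
    <Transform translation='0 1 0'>
      <Shape>
        <Box size='2 2 2'/>
      </Shape>
    </Transform>
  </Scene>
</X3D>
"""

root = ET.fromstring(doc)
# Any XML tool can enumerate the scene graph nodes in document order.
tags = [el.tag for el in root.iter()]
```

This is exactly the property that makes X3D attractive as a scene-graph-agnostic interchange format: the reader side needs no knowledge of any particular scene graph package.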
Mike McCann gave a presentation on MBARI's use of GeoVRML worlds to provide access to and visualization of data from their underwater remotely operated vehicles (ROVs). It is an excellent use of the technology for educational and scientific applications.
Courses, Papers, Panels... There were a LOT of courses this year - presentations from most of which are on the course CD-ROM. Almost more courses (59) than papers (67!). I'm quite pleased to have the CD-ROMs of both. Hopefully ACM will continue to provide these for full conference attendees. I did not attend any of the papers/panel sessions this year; I was hoping to catch them online.