Report by Jerry Isdale
HRL Laboratories LLC
Other Reviews | Location | Opening Reception | Vendor Exhibits | Low Cost Immersive VR | Emerging Technology | Art Gallery | Studio | Birds Of A Feather (BOFs)
Official Siggraph Reports: reports by a small group of volunteers |
CG World: Day 1, Day 2, Day 3, Day 4 |
Extreme Tech |
VRefresh.org |
Tom's Hardware: mostly on NVidia and other graphics chips, but with a bit on the rest of wonderland. |
There is not a lot of tourist stuff around San Antonio, but the Alamo is a short walk away. Engulfed by a big city, it certainly looks different from what I recall from the movies. The building everyone recognizes as the Alamo is actually the chapel, which was a small side building at the time of the battle.
One interesting new (?) thing on the floor: several vendors had pen-and-paper drawing areas set up, sometimes with live models. Some (nVidia, for one) held a contest for images that included the company logo. A nice low-tech appeal to the hordes of artists wandering the floor.
Intersense was showing off their wireless IS900 setups in several places. Several single-screen projection displays used a set of IS900 trackers around the frame. Not as well covered was the SoniWing, part of their SimTracker configuration. This single-piece fixed frame is intended for cockpit and similar small-area tracking. It was on display in the FCS Robotics booth along with the Haptic Master robotic arm. Intersense also has a set of three SoniStrips fixed in a rigid array that can be used for small areas. The IS900 is an excellent 6DOF tracker, based on my experience with it here at HRL. The entry price is about $17k US.
Pixologic was demonstrating their ZBrush product. Their ZSpheres modeling tool caught my eye, having recently seen BLUIsculpt. ZSpheres allows fairly sophisticated modeling based on spheres. It would be interesting to try such a tool in an immersive environment; of course, the UI would need radical modification from its current WIMP form.
nVidia was showing their current graphics hardware, letting out only a little on the next-gen NV30 GPU (Ace's Hardware has some info leaked after the show). They had some impressive demos of real-time lighting/shading on XBox-type hardware. They also had some courses on their new GPU language, Cg. I did pick up rumors that genlock would be featured on the NV30 Quadro boards, but the nVidia folks would only say it would be on some future card. Check out Tom's Hardware's review of Siggraph for more on nVidia.
Panoram Technologies showed their new "Digital Imaging Table", a flat 65" display with a resolution of 2300x1850 (0.53mm dot pitch). It uses four autocalibrating projectors with seamless edge blending; Olympus provides the autocalibration hardware and software. The image was viewable from extreme off angles and looked fantastic: bright, sharp, good color. It does not yet support motion images (animation, video, etc.). Cost is about $300k, so don't get your hopes up. The autocalibration and edge-blending technology might be applicable to a large tiled wall display; the Olympus rep at the booth was open to discussing such applications.
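The core idea behind edge blending is simple enough to sketch. The following is a minimal illustration of the general technique (assuming a projector gamma of 2.2; this is my sketch, not Panoram's or Olympus's actual algorithm): each projector fades its image across the overlap zone so the summed light on screen stays uniform.

```python
def blend_weight(x, overlap_start, overlap_end, gamma=2.2):
    """Attenuation for a pixel at horizontal position x.

    Returns 1.0 before the overlap zone, 0.0 after it, and a
    gamma-corrected ramp in between, so two facing ramps sum to
    uniform brightness (projector light ~ pixel_value ** gamma).
    """
    if x <= overlap_start:
        return 1.0
    if x >= overlap_end:
        return 0.0
    t = (x - overlap_start) / (overlap_end - overlap_start)
    linear = 1.0 - t                 # desired light fraction from this projector
    return linear ** (1.0 / gamma)   # pre-compensate for projector gamma

# Two projectors overlapping between x = 0.8 and x = 1.0 (normalized):
# at the midpoint each contributes half the light, so the sum is uniform.
w = blend_weight(0.9, 0.8, 1.0)
total_light = w ** 2.2 + w ** 2.2    # left ramp + mirrored right ramp
```

The gamma pre-compensation is the non-obvious part: a linear ramp in pixel values would leave a visible bright band in the overlap, because projector output is nonlinear in pixel value.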
Actuality Systems, Inc. showed their Perspecta® Spatial 3D Platform, one of the spinning-surface projection devices.
Z Corp was one of several 3D printer companies exhibiting at the show. The Z406 uses a full-color ink-jet-based technology. Models can be built using several colors, including images on the final surface. They are offering one free model printout from your CAD files. They also support VRML and several GIS formats. Their display included an 8" square terrain model with topo colors.
For 3D Inc was showing a single-point stereoscopy technique that converts a single 2D video stream into stereo pairs in real time (with a three-frame delay, if I recall correctly). It uses the motion of objects and camera between frames to create the depth effect. The technique can be integrated with virtual environments for video textures, etc. An interesting idea.
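The frame-delay trick can be sketched as follows (my reconstruction of the general idea, not For 3D's actual implementation; integers stand in for video frames): when the camera or scene moves laterally, a frame from a few frames back looks like the same scene viewed from a slightly offset position, so pairing the current and delayed frames gives a pseudo stereo pair.

```python
from collections import deque

def stereo_pairs(frames, delay=3):
    """Pair each incoming frame with the frame `delay` frames earlier,
    yielding (current, delayed) as a pseudo stereo pair. Lateral motion
    between the two frames provides the eye separation."""
    history = deque(maxlen=delay + 1)
    for frame in frames:
        history.append(frame)
        if len(history) == delay + 1:
            yield history[-1], history[0]   # newest + oldest retained frame

# Frame numbers stand in for images:
pairs = list(stereo_pairs(range(6), delay=3))
# pairs == [(3, 0), (4, 1), (5, 2)]
```

The obvious limitation, of course, is that the depth effect depends entirely on motion: a static shot from a static camera yields identical left/right images.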
ATI introduced their Fire GL X1 boards using the FGL 9700 VPU (Visual Processing Unit), aimed at the Digital Content Creation (DCC) and CAD markets. It can do a fair job in real-time work too.
Other vendors are covered quite well by the other reviews.
On Monday, Dave Pape gave a talk in the Educators Program on "Building Affordable Projective Immersive Displays". Notes for the course are available on his web site. Dave presented cost breakdowns for single-screen passive stereo systems using PC rendering. His estimates show a mid-range price of about $20k US. The major costs are the projectors and trackers. Software is another issue: Dave's systems use VRCO's software, which can carry a fair license fee. Trackers have lots of issues with interference, resolution, latency, and area of coverage. A good low-latency wide-area tracker like the IS900 can cost more than $20k US by itself.
Projectors are a big issue. Immersive systems used to use CRT projectors, which have lots of configuration parameters to correct for distortion, alignment, registration, etc. However, most companies have stopped making them, except for high-end simulator purposes. DLP projectors have taken over the market; unfortunately, most of these do not have the correction capabilities needed for good stereo image registration. Christie Digital and Barco both have DLP projectors for active stereo, but they cost more than Dave's estimate for the entire setup. Passive stereo generally requires two projectors and has its own alignment issues. (Note: Christie's Mirage series can do both active and passive stereo, for $50k-100k US.) Passive (polarized) stereo cannot be used with mirrors, as they tend to disrupt the polarization. This means the throw distance from projector to screen can be quite large. I did hear rumors of some low-cost projector folks working on short-throw lenses for passive displays. If you can avoid stereo (and lose some of the immersiveness), you can save a LOT: single projectors, folded optical path, no genlock requirement, etc.
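The throw-distance problem is just arithmetic. A quick sketch with illustrative numbers (the throw ratios and screen width here are assumptions for the example, not any vendor's specs):

```python
def throw_distance(throw_ratio, image_width_m):
    """Distance from projector lens to screen for a given image width:
    throw_distance = throw_ratio * image_width."""
    return throw_ratio * image_width_m

# A long-throw lens (assumed ratio 2.0) filling a 3 m (~10 ft) screen
# needs 6 m of clear space behind it. A mirror could fold that path
# roughly in half, but polarized passive stereo rules mirrors out,
# which is why the rumored short-throw lenses matter.
unfolded = throw_distance(2.0, 3.0)      # 6.0 m
short_throw = throw_distance(0.8, 3.0)   # 2.4 m with a short-throw lens
```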
On Tuesday there was a full-day course on "Commodity Clusters for Immersive Projection Environments". The course provided a good overview of the issues and technologies required for cluster-based IPEs, as well as some live demonstrations in the Siggraph CAL. Most multi-screen IPEs have used large multi-pipe SGI systems. Cluster IPEs instead use distributed rendering on a collection of lower-cost PC-based systems. These are not conventional computing "clusters" like the Beowulf architecture; they generally use conventional ethernet interconnects and some form of distributed, synchronized rendering.

The course organizers are responsible for three open source software projects of note: Syzygy, NetJuggler, and DICElib. Syzygy is a fairly new project (v0.4) that looks very promising; it offers several different distribution architectures. NetJuggler is a distributed version of VRJuggler: it coordinates multiple copies of a VRJuggler application running on separate systems. (The VRJuggler team is implementing cluster support in the next version of their project.) A subproject of NetJuggler is the SoftGenLock package, which addresses the lack of "genlock" on lower-cost graphics boards. Genlock refers to the ability to synchronize the display of frames on multiple devices; it is critical for active stereo displays. While a standard feature of high-end products (e.g., 3Dlabs Wildcat, Quantum3D), it is missing from commodity graphics boards from nVidia, ATI, Matrox, etc. SoftGenLock uses a fast communications channel (the parallel port) and software register tweaking to implement genlock capability for these cards on Linux systems. It works - sorta. I've had some difficulties getting it to work well with my installation, but that is still a work in progress. I heard from several people that the next version of the nVidia Quadro boards will support genlock - something about expiring agreements with SGI. Hopefully those boards will be out by early 2003.
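The swap-lock concept behind synchronized cluster rendering can be illustrated in a few lines. This is a conceptual sketch using threads as stand-ins for render nodes (SoftGenLock's actual implementation works at the video-timing level, nothing like this): each node renders its frame at its own pace, then everyone waits at a barrier so the buffer swaps happen together.

```python
import threading

NUM_NODES = 3
NUM_FRAMES = 2
swap_barrier = threading.Barrier(NUM_NODES)
swap_log = []
log_lock = threading.Lock()

def render_node(node_id):
    for frame in range(NUM_FRAMES):
        # ... each node renders its view of `frame` here; nodes may
        # finish at different times ...
        swap_barrier.wait()        # block until every node is ready
        with log_lock:
            swap_log.append(frame)  # stand-in for the buffer swap

threads = [threading.Thread(target=render_node, args=(i,))
           for i in range(NUM_NODES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Because of the barrier, all frame-0 swaps complete before any
# frame-1 swap begins, keeping the displays in lockstep.
```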
DICElib is a less ambitious project, concentrating more on simulation synchronization and distributed application control. Overall, the course was a good introduction to the issues of commodity IPEs. I hope the organizers can find a way to put their presentation online somewhere.
There was a recent announcement of a Workshop on Commodity-Based Visualization Clusters in conjunction with IEEE Visualization 2002, Sunday, October 27, 2002 (Boston, MA, USA). It should be a great follow-on to the topic.
Tomorrow's Yesterday: Mapping the E-Tech Continuum; John M. Fujii, Hewlett Packard
A poster mapping ten years of Siggraph Emerging Technology exhibits onto a 2D spiral-galaxy shape. Two 'theme' variables are assigned to each exhibit: the primary theme is mapped to a spoke on the spiral, and the secondary theme collects exhibits within the primary. It took me a few moments to figure it out and find previous projects. It was interesting to see the correlation within a cluster, although I'm not sure about the assignment of themes. It is also not easy to see relations where primary and secondary are reversed; e.g., my Time Table of History (1991) was given a primary theme of Education with Hypermedia as the secondary theme. There are two other projects in this cluster. How many are under Hypermedia/Education? |
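The poster's layout scheme, as I read it, can be sketched like this (the theme names and spacing here are hypothetical illustrations, not Fujii's actual categories or data):

```python
import math

# Hypothetical primary themes; the poster's real categories differed.
PRIMARY_THEMES = ["Education", "Haptics", "Displays", "Robotics"]

def spoke_position(primary_theme, rank):
    """Place an exhibit on the diagram: its primary theme picks the
    spoke (angle), and its rank along that spoke (exhibits grouped by
    secondary theme) picks the radius, so related exhibits cluster
    along one arm."""
    spoke = PRIMARY_THEMES.index(primary_theme)
    angle = 2 * math.pi * spoke / len(PRIMARY_THEMES)
    radius = 1.0 + 0.5 * rank          # later exhibits sit farther out
    return radius * math.cos(angle), radius * math.sin(angle)

x, y = spoke_position("Education", 0)  # first exhibit on the Education spoke
```

A layout like this makes same-spoke correlations easy to spot, but, as noted above, an exhibit whose primary and secondary themes are swapped lands on a different spoke entirely.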
A New Step-in-Place Locomotion Interface for Virtual Environment
With Large Display System; Laroussi Bouguila, Tokyo Institute of Technology
|
ARS BOX with Palmist - Advanced VR-System based on commodity hardware;
Horst Hörtner, Ars Electronica Futurelab
Three-channel, rear-projection passive stereo display system with a wireless Compaq iPaq for control. NVidia graphics boards on Linux computers, with synchronization accomplished by the ArsSyncBox, derived from SoftGenLock. Rendering worked pretty well for several sequences. A tunnel simulation with odd black-and-white textures had significant problems with swimming textures when the eye point changed. This was a rendering bug that the artist turned into a feature. |
Block Jam; Henry Newton-Dunn, Sony Design Lab
|
Distributed Systems of Self-reconfiguring Robots; Zack Butler, et
al, Dartmouth College
|
Focus Plus Context Screens: Visual Context and Immersion on the
Desktop; Patrick Baudisch, Xerox PARC
|
Immersive and Interactive Rear-Projected Stereo DLP Reality Center;
Andrew Joel, Barco Simulation Products
Three-channel rear-projection, curved-screen, active stereo display: basically a wide projection wall. The new and unique technology was the use of special warping hardware/software to support active stereo and edge blending for DLP projectors. A similar system was shown last year, perhaps an earlier version? It still had very noticeable banding where the projectors overlapped, especially when viewed off axis (sweet spot?). |
Lewis the Robotic Photographer; Cindy Grimm, Washington University
in St. Louis
A large mobile trash can with a camera on top that wandered about trying to take pictures of people. It used a fixed IR light source to position itself within a pre-defined area, and detected people via flesh tones and eye-shape detection in images, with legs detected by a sonar range finder. It seemed to do a fair job taking snapshot-quality pictures. Of more interest was how people interacted with the bot. |
NONA-Vision; Hiroo Iwata, et al University of Tsukuba
|
Occlusive Optical See-through Displays in a Collaborative Setup;
Kiyoshi
Kiyokawa,et al
|
Physiological Reaction and Presence in Stressful Virtual Environments;
Michael
Meehan,et al, UNC
|
Public Anemone: An Organic Robot Creature, Cynthia Breazeal
et al, MIT Media Lab
Mostly a robotics effort which was supposed to be responsive to its audience. Some of the touch-sensitive robots had some reaction, but otherwise it didn't seem very responsive. |
Regeneration of Real Objects in the Real World; Hiroto Matsuoka,
et al, NTT Lifestyle and Environmental Technology Laboratories
|
SmartFinger : Nail-Mounted Tactile Display; Hideyuki Ando, et
al University of Tokyo
A voice coil, LED, and photodetector attached to the fingernail allow haptic sensing of images. It worked OK as a quick demo when moved fairly quickly over high-contrast black-and-white lines, etc. It also worked on images visible only under ultraviolet light. A more robust, higher-resolution version with user training might have applications for the visually impaired. |
The Virtual Showcase: A Projection-Based Multi-User Augmented Reality
Display; Oliver Bimber et al
An inverted, truncated glass pyramid encases a real object, while graphics are projected up onto the sides of the pyramids. Multiple view angles are supported by the different pyramid sides. It was a pretty effective device with applications for museums, etc. |
TWISTER: A Media Booth; Kenji Tanaka et al Tachi Lab, University
of Tokyo
|
Ultrasound Visualization with the Sonic Flashlight; Damion Shelton,
George Stetton, Wilson Chang
|
Virtual Chanbara, Daijiro Koga
|
There was one VR-related exhibit: "Body Language User Interface", or BLUIsculpt™, from the University of Alaska Fairbanks / Arctic Research Supercomputer Center. This is a semi-immersive voxel-based modeling tool. The Studio version used a single 10ft rear-projection stereo display (Christie Digital's polarized active/passive projector) with an Intersense IS900 tracker. Virtual sculpting is an interesting idea - Sensable's FreeForm product does it nicely with haptic display. I'm not sure the voxel technology of BLUIsculpt works as well. Perhaps a sphere-controlled interface similar to Pixologic's ZSpheres might work better.
There was a lively discussion of the ability to read different scene graph formats. Generally, every scene graph package implements its own file format that closely reflects its internal data structures and design decisions. Reading a non-native format exposes all the differences and can be a difficult problem. Project Lodestone is an open source attempt to create an API to address the problem. Lodestone would provide basic readers for the various formats, and each scene graph would implement a layer on top of the Lodestone API to map its nodes into the native format. This seems to me to be akin to the XML SAX callback system. There was some disagreement about the utility of Lodestone, probably stemming from the different views of a translation API versus an interchange file format. VRML and X3D might provide a common, scene-graph-agnostic (??) interchange file format; Lodestone would read those files as well as other formats. While not a scene graph expert, I am not sure that the current Lodestone approach is best. The callback-style API has a fair number of deficiencies, especially when there may be confusion or a design mismatch in how the graph nodes affect each other. A DOM-style API would read the specific scene graph into a neutral-format document graph which could then be traversed at will by the translation layer.
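The SAX-versus-DOM distinction for scene graph translation can be made concrete with a toy sketch (hypothetical node kinds and function names; this is not Lodestone's actual API):

```python
# --- SAX/callback style: the reader drives; the handler must keep
# its own context (depth, current parent, etc.) as events arrive.
class CallbackHandler:
    def __init__(self):
        self.depth = 0
        self.events = []

    def begin_node(self, kind):
        self.events.append(("begin", kind, self.depth))
        self.depth += 1

    def end_node(self, kind):
        self.depth -= 1
        self.events.append(("end", kind, self.depth))

def read_with_callbacks(scene, handler):
    """Walk a (kind, children) tree, firing events as nodes appear."""
    kind, children = scene
    handler.begin_node(kind)
    for child in children:
        read_with_callbacks(child, handler)
    handler.end_node(kind)

# --- DOM style: build a neutral in-memory graph first, then let the
# translator traverse it in whatever order suits the target format.
def read_to_graph(scene):
    kind, children = scene
    return {"kind": kind, "children": [read_to_graph(c) for c in children]}

scene = ("group", [("transform", [("mesh", [])]), ("light", [])])
handler = CallbackHandler()
read_with_callbacks(scene, handler)
graph = read_to_graph(scene)
```

The trade-off is the usual one: the callback consumer sees each node exactly once, in reader order, while the neutral graph can be revisited at will - which matters when target-format nodes depend on information that arrives "later" in the source.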
Another controversial point was brought up by Steve Baker of PLib fame: that hardware shaders and the new crop of shader languages (Cg, OpenGL 2.0, etc.) will render conventional scene graphs obsolete. Steve bases this on his experience with the difficulty of handling the messy details of texture/material nodes in different scene graph systems. The SGI representatives noted that they are working on incorporating portions of OpenGL Shader (and other SGI Open** libraries) into OpenGL Performer. An interesting direction for shaders appeared in the Siggraph paper "Shader-Driven Compilation of Rendering Assets" by Paul Lalonde and Eric Schenk of Electronic Arts (Canada) Inc. It does not mention scene graphs directly, but discusses techniques to compile an interactive world description, including shaders, for use on a variety of game platforms.
Don Brutzman distributed the latest X3D CD-ROM. X3D is an XML interchange format for virtual environments that supports tags derived from GeoVRML.
Mike McCann gave a presentation on MBARI's use of GeoVRML worlds to provide access to and visualization of data from their underwater remotely operated vehicles (ROVs). It is an excellent use of the technology for educational and scientific applications.
Courses, papers, panels... There were a LOT of courses this year; presentations from most of them are on the course CD-ROM. Almost more courses (59) than papers (67!). I'm quite pleased to have the CD-ROMs of both. Hopefully ACM will continue to provide these for full conference attendees. I did not attend any of the papers/panel sessions this year; I was hoping to catch them online.