ACM Siggraph
2002

Report by Jerry Isdale
HRL Laboratories LLC

Introduction

This is a report on my experiences at ACM Siggraph 2002 in San Antonio, Texas, July 22-25.  I concentrated on the Emerging Technologies, Gallery, and vendor exhibits, with some select Courses and other events.  This was about my 8th Siggraph in the last 15 years.  It seemed much smaller in many respects than in previous years, particularly the vendor exhibits.  Overall, Siggraph seems to be stagnating a bit, lacking the major advances I recall from earlier years.  The full conference package now includes several pounds of paper and CD-ROMs, including full course notes.  This makes it an excellent deal in my opinion - except I had to head right back to my hotel to drop off the load!
Index pages of all my pictures from the show are available here. If you want originals (1600x1200), email me.

Contents:

Other Reviews
Location
Opening Reception
Vendor Exhibits
Low Cost Immersive VR
Emerging Technology
Art Gallery
Studio
Birds Of A Feather (BOFs)
Other Stuff: Siggraph Online, Papers & Panels

Other Siggraph Reviews

Official Siggraph Reports
Reports by a small group of volunteers

CG World: Day 1, Day 2, Day 3, Day 4
Four one-page reports from 'The Editors of Computer Graphics World'

Extreme Tech
Shorter review touching on highlights with pics

VRefresh.org
A fairly long complete review, although the exhibitor listing reads like excerpts from press releases without commentary.

Tom's Hardware
Mostly on nVidia and other graphics chips, but with a bit on the rest of wonderland.

Location: San Antonio, Texas

Hot and humid, but the River Walk area was cooler and quite nice, with lots of restaurants, bars, etc. within easy walking distance.

There is not a lot of tourist stuff around San Antonio; however, the Alamo is a short walk away.  Engulfed by a big city, it certainly looks different from what I recall in the movies.  The building everyone recognizes as the Alamo is actually the chapel, and it was a small side building at the time of the battle.

Monday night Opening Reception

The Monday night Opening Reception was held at Sunset Station.  Shuttle buses from the conference center and hotels ran on a regular basis.  Food, drink, and people were plentiful.  The venue was quite large, with several big rooms, lots of bars, a large outdoor open area with a stage, etc.

Photo captions:
- one of the indoor rooms, where they showed demo reels
- a dance room that was VERY LOUD (guess I'm getting old too)
- Megan and Joshua (?), a couple of recent grads from Full Sail, soaking up all the knowledge they could and enjoying Siggraph's Opening Reception
- Diana Lee, Secretary 2001-2001 of LA ACM Siggraph, gets her face painted and then a hug from Howard Neely (HRL Labs & LA Siggraph)

Vendor Exhibits

Panorama of exhibit show floor
The exhibit area was considerably smaller than in previous years.  Many people commented on being able to cover the entire floor in a few brief hours.  Personally, I made several trips around the floor.  Another sign of the economic health of the industry was the almost total lack of giveaway tchotchkes - not even t-shirts!  My kids were very disappointed.  There were several places handing out mints.  ILM was handing out packets of postcards from Star Wars Episode II.  Quite a few places were giving away demo CDs - mostly mediocre multimedia sales presentations.  Side Effects Software was a notable exception - they gave away the 'Apprentice Edition' of their high-end animation tool Houdini.  As a contributor to its predecessor, PRISMS, I was glad to pick up a copy - now I need to find time to learn it!

One interesting new (?) thing on the floor: several vendors had pen-and-paper drawing areas set up, sometimes with live models.  Some (nVidia, for one) had a contest for images that included the company logo.  A nice low-tech appeal to the hordes of artists wandering the floor.

Intersense was showing off their wireless IS900 setups in several places.  There were several single-screen projection displays that used a set of IS900 trackers around the frame.  Not well covered was the SoniWing, part of their SimTracker configuration; this single-piece fixed frame is intended for cockpit and similar small-area tracking.  It was on display in the FCS Robotics booth along with the HapticMaster robotic arm.  Intersense also has a set of three SoniStrips fixed in a rigid array that can be used for small areas.  The IS900 is an excellent 6DOF tracker, based on my experience with it here at HRL.  The entry price is about $17k (US).

Pixologic was demonstrating their ZBrush product.  Their ZSpheres modeling tool caught my eye, having recently seen BLUIsculpt (covered in the Studio section below).  ZSpheres allow fairly sophisticated modeling based on spheres.  It would be interesting to try such a tool in an immersive environment - of course, the UI would need radical modification from its current WIMP form.

nVidia was showing their current graphics hardware, letting out only a little on the next-gen NV30 GPU (Ace's Hardware has some info that leaked after the show). They had some impressive demos of real-time lighting/shading on Xbox-type hardware.  They also had some courses on their new GPU language, Cg.  I did pick up rumors that genlock would be featured on the NV30 Quadro boards, but the nVidia folks would only say it would be on some future card.  Check out Tom's Hardware's review of Siggraph for more on nVidia.

Panoram Technologies showed their new "Digital Imaging Table", a flat 65" display with a resolution of 2300x1850 (0.53mm dot pitch). It uses four auto-calibrating projectors with seamless edge blending; Olympus provides the auto-calibration hardware/software. The image was viewable from extreme off angles and looked fantastic - bright, sharp, good color. It does not yet support motion imagery (animation, video, etc.). The cost is about $300k, so don't get your hopes up. The auto-calibration and edge-blending technology might be applicable to a large tiled wall display; the Olympus rep at the booth was open to discussing such applications.

Actuality Systems, Inc. showed their Perspecta® Spatial 3D Platform, one of the spinning-surface projection devices.

Z Corp was one of several 3D printer companies exhibiting at the show.  The Z406 uses a full-color, ink-jet-based technology.  Models can be built using several colors, including images on the final surface.  They were offering one free model printout from your CAD files.  They also support VRML and several GIS formats.  Their display included an 8" square terrain model with topo colors.

For 3D Inc. was showing a single-point stereoscopy technique that converts a single 2D video stream into stereo pairs in real time (with a 3-frame delay, if I recall correctly).  It uses the motion of objects and camera between frames to create a depth effect.  The technique can be integrated with virtual environments for video textures, etc.  Interesting idea; a sketch of the frame-delay trick follows.
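
As best I can tell, the core trick is easy to sketch: hold a short ring buffer of frames and pair the current frame with a slightly delayed one, so horizontal motion between the two reads as stereo disparity. A minimal C++ sketch of that pairing (my own illustration, assuming the 3-frame delay I remember; the real product surely does more analysis, such as choosing which eye gets the delayed frame based on motion direction):

    #include <array>
    #include <cstddef>
    #include <cstdio>

    // Toy delay-based 2D-to-stereo converter: pair each incoming frame
    // with the frame from `Delay` frames ago. Horizontal camera/object
    // motion between the two frames then appears as stereo disparity.
    template <typename Frame, std::size_t Delay = 3>
    class StereoFromMotion {
        std::array<Frame, Delay + 1> ring{};
        std::size_t head = 0, seen = 0;
    public:
        struct Pair { Frame left, right; };
        Pair push(const Frame& f) {
            ring[head] = f;
            std::size_t oldest = (head + 1) % ring.size();
            head = (head + 1) % ring.size();
            if (seen < Delay) { ++seen; return {f, f}; } // warm-up: no depth yet
            return {f, ring[oldest]};  // current frame paired with delayed one
        }
    };

    int main() {
        StereoFromMotion<int> conv;    // ints stand in for video frames
        for (int frame = 0; frame < 6; ++frame) {
            auto [l, r] = conv.push(frame);
            std::printf("left=%d right=%d\n", l, r);
        }
    }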

ATI introduced their FireGL X1 boards using the FGL 9700 VPU (Visual Processing Unit), aimed at the Digital Content Creation (DCC) and CAD markets.  It can do a fair job in real-time work too.

Other vendors are covered quite well by the other reviews.

Low Cost Immersive Projection Environments (IPEs)

Immersive projection systems are perhaps the hottest VR tech. Mechdyne and Fakespace both announced multimillion-dollar (US) sales and installations at the show.  These SGI-based solutions are impressive for the size of data and imagery they handle; however, not many places can afford the price tag - or the maintenance costs!   I talked with quite a few researchers who have canceled their $50-100K+/yr SGI maintenance contracts on older systems.  Upgrading to new systems with new maintenance fees is just not feasible.  People are looking to lower-cost solutions, as demonstrated by several courses and demonstrations based on PC rendering platforms.

Monday, Dave Pape gave a talk in the Educators Program on "Building Affordable Projective Immersive Displays".  Notes for the course are available on his web site. Dave presented cost breakdowns for single-screen passive stereo systems using PC rendering; his estimates show a mid-range price of about $20k US.  The major costs are the projectors and trackers.  Software is another issue - Dave's systems use VRCO's software, which can carry a fair license fee. Trackers have lots of issues with interference, resolution, latency, and area of coverage; a good low-latency, wide-area tracker like the IS900 can by itself cost more than $20k US.

Projectors are a big issue.  Immersive systems used to use CRT projectors, which have lots of configuration parameters to correct for distortion, alignment, registration, etc. However, most companies have stopped making them, except for high-end simulator purposes. DLP projectors have taken over the market; unfortunately, most of these do not have the correction capabilities needed to do good stereo image registration. Christie Digital and Barco both have DLP projectors for active stereo, but they cost more than Dave's estimate for an entire setup.  Passive stereo generally requires two projectors and has its own alignment issues.  (Note: Christie's Mirage series can do both active and passive stereo - for $50k-100k US.)  Passive (polarized) stereo cannot be used with ordinary mirrors, as they tend to disrupt the polarization; this means the throw distance from projector to screen can be quite large.  I did hear rumors of some low-cost projector folks working on short-throw lenses for passive displays. If you can avoid stereo (and lose some of the immersiveness), you can save a LOT - single projectors, folded optical path, no genlock requirement, etc.

Tuesday, there was a full-day course on "Commodity Clusters for Immersive Projection Environments".  The course provided a good overview of the issues and technologies required for cluster-based IPEs, as well as some live demonstrations in the Siggraph CAL.  Most multi-screen IPEs have used large multi-pipe SGI systems.  Cluster IPEs use distributed rendering on a collection of lower-cost PC-based systems.  These are not the conventional computing "clusters" like the Beowulf architecture; they generally use conventional Ethernet interconnects and some form of distributed, synchronized rendering.

The course organizers are responsible for three open source software projects of note: Syzygy, NetJuggler, and DICElib.  Syzygy is a fairly new project (v0.4) that looks very promising; it offers several different distribution architectures.  NetJuggler is a distributed version of VRJuggler: it coordinates multiple copies of a VRJuggler application running on separate systems.  (The VRJuggler team is implementing cluster support in the next version of their project.)  A subproject of NetJuggler is the SoftGenLock package, which addresses the lack of "genlock" on lower-cost graphics boards.  Genlock refers to the ability to synchronize the display of frames on multiple devices; it is critical for active stereo displays.  While a standard feature of high-end products (e.g., 3Dlabs Wildcat, Quantum3D), it is missing from commodity graphics boards from nVidia, ATI, Matrox, etc. SoftGenLock uses a fast communications channel (the parallel port) and software register tweaking to implement genlock for these cards on Linux systems. It works - sorta.  I've had some difficulties getting it to work well with my installation, but that is still a work in progress.  I heard from several people that the next version of the nVidia Quadro boards will support genlock - something about expiring agreements with SGI.  Hopefully those boards will be out by early 2003.

DICElib is a less ambitious project, concentrating more on simulation synchronization and distributed application control.   Overall the course was a good introduction to the issues of commodity IPEs.  I hope the organizers can find a way to put their presentation online somewhere.
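
To make the synchronization problem concrete: the softest form is a swap barrier, where no node presents its frame until every node has finished drawing; true genlock goes further and aligns the video scan-out itself, which is what SoftGenLock approximates with its register tweaking. Here is a minimal sketch of the swap-barrier idea, with threads standing in for cluster nodes (my own illustration; systems like Syzygy or NetJuggler run the equivalent barrier over Ethernet between machines):

    #include <barrier>
    #include <cstdio>
    #include <thread>
    #include <vector>

    // Swap-lock sketch: each node renders its view, then waits at a
    // barrier so all nodes flip their back buffers together. Without
    // this, the walls of a multi-screen display show different frames.
    int main() {
        const int nodes = 3, frames = 4;
        std::barrier sync(nodes);           // releases when all nodes arrive
        std::vector<std::jthread> cluster;
        for (int id = 0; id < nodes; ++id)
            cluster.emplace_back([&, id] {
                for (int f = 0; f < frames; ++f) {
                    // drawScene(id, f);        // render into the back buffer
                    sync.arrive_and_wait();     // swap-lock across the cluster
                    // glXSwapBuffers(...);     // everyone presents together
                    std::printf("node %d presents frame %d\n", id, f);
                }
            });
    }   // jthreads join on destruction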

There was a recent announcement of a Workshop on Commodity-Based Visualization Clusters in conjunction with IEEE Visualization 2002, Sunday, October 27, 2002 (Boston, MA, USA). It should be a great follow-on to the topic.

Emerging Technology

I did not try out every exhibit, but here are my views on those I did try.
Tomorrow's Yesterday: Mapping the E-Tech Continuum; John M Fujii, Hewlett Packard
A poster mapping ten years of Siggraph Emerging Technology exhibits onto a 2D spiral-galaxy shape. Two 'theme' variables are assigned to each exhibit: the primary theme is mapped to a spoke of the spiral, and the secondary theme groups exhibits within the primary.  It took me a few moments to figure it out and find previous projects.  It was interesting to see the correlation within a cluster, although I'm not sure about the assignment of themes.  It is also not easy to see relations where primary and secondary are reversed.  For example, my Time Table of History (1991) was given a primary theme of Education with hypermedia as the secondary theme; there are two other projects in this cluster.  How many projects are filed under Hypermedia/Education instead?
A New Step-in-Place Locomotion Interface for Virtual Environment With Large Display System; Laroussi Bouguila, Tokyo Institute of Technology
A front-projection display with a small (pressure-sensitive?) turntable on which the participant stands and walks. The environment is a 2D maze.  Small steps are required; turning your feet reorients the virtual world briefly, and then the turntable rotates back (a sketch of this idea follows below).  It worked pretty well as long as you don't walk too fast - I was able to run through the maze walls with quick steps.
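
My reading of the reorientation trick, as a sketch: the virtual heading follows only the user's active turning, while the turntable slowly 'washes out' the physical rotation so the user drifts back to facing the screen without noticing. This is a reconstruction from watching the demo, not the authors' algorithm; every name and rate below is made up:

    #include <cstdio>

    // Washout controller sketch for a turntable locomotion interface.
    // phi = the user's foot yaw in room coordinates (rad, 0 = facing screen).
    // Passive rotation by the platform must not change the virtual heading;
    // only the user's own turning should.
    struct TurntableWashout {
        double heading = 0.0;       // virtual-world heading for the renderer
        double prevPhi = 0.0;       // last measured foot yaw
        double lastRate = 0.0;      // turntable command from the last update
        double washoutGain = 0.5;   // 1/s, assumed return speed

        double update(double phi, double dt) {
            // Measured change minus the platform's contribution over the
            // last interval isolates the user's own turn.
            heading += (phi - prevPhi) - lastRate * dt;
            prevPhi = phi;
            lastRate = -washoutGain * phi;  // drive the feet back toward 0
            return lastRate;                // hypothetical motor command
        }
    };

    int main() {
        TurntableWashout w;
        double phi = 0.6;           // user suddenly turns ~34 degrees
        for (int i = 0; i < 5; ++i) {
            double rate = w.update(phi, 0.1);
            phi += rate * 0.1;      // the platform carries the user back
            std::printf("foot yaw %.3f, virtual heading %.3f\n", phi, w.heading);
        }
    }
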
ARS BOX with Palmist - Advanced VR-System based on commodity hardware; Horst Hörtner, Ars Electronica Futurelab
A three-channel, rear-projection passive stereo display system with a wireless Compaq iPaq for control: nVidia graphics boards on Linux computers, with synchronization accomplished by the ArsSyncBox, which is derived from SoftGenLock.  Rendering worked pretty well for several sequences.  A tunnel simulation with odd black-and-white textures had significant problems with swimming textures when the eye point changed - a rendering bug that the artist turned into a feature.
Block Jam; Henry Newton-Dunn, Sony Design Lab
A tangible interface to a music synthesizer.  Arranging and clicking on blocks controls sequencing.  An initial musical phrase is inserted at an end block and 'transmitted' through the chain; corners and end points reflect the sounds. It looked like an interesting collaborative game, although the processing in each block was quite limited.  It might stay fun longer if more options were available (at the cost of a more complex interface?).
Distributed Systems of Self-reconfiguring Robots; Zack Butler, et al, Dartmouth College
Small robots that join together for locomotion by reconfiguring the cluster.  Two types of bots were shown; only one form is pictured here.  The bots were assembled from parts built using 3D CAD printers. They moved kinda slow, but it's hard to get both speed and torque from small motors.  Funded by NSF, not DARPA.
Focus Plus Context Screens: Visual Context and Immersion on the Desktop; Patrick Baudisch, Xerox PARC
An LCD panel inset in a front-projected display provides high-resolution detail within a larger, low-res image.  It worked pretty nicely, although the user's shadow could be disconcerting. No interaction was demonstrated while I was watching.
Immersive and Interactive Rear-Projected Stereo DLP Reality Center;  Andrew Joel, Barco Simulation Products
A three-channel rear-projection, curved-screen, active stereo display - a basic wide projection wall. The new and unique technology was the use of special warping hardware/software to support active stereo and edge blending for DLP projectors (a sketch of the edge-blending idea follows below). A similar system was shown last year, perhaps an earlier version?   It still had very noticeable banding where the projectors overlapped, especially when viewed off axis (sweet spot?).
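
For reference, the edge-blending half of that trick is conceptually simple: in the band where two projectors overlap, each one's contribution is ramped down so the summed light stays constant, and the ramp must be gamma-corrected because pixel values are not linear in light output. A small sketch of the weight function (a typical formulation with an assumed gamma, not Barco's actual implementation):

    #include <cmath>
    #include <cstdio>

    // Edge-blend weight at normalized position x (0..1) across a
    // two-projector overlap band: the left projector uses w(x), the
    // right uses w(1 - x). Raising the linear ramp to 1/gamma makes
    // the *light* (value^gamma) sum to a constant across the band.
    double blendWeight(double x, double gamma = 2.2) {
        return std::pow(1.0 - x, 1.0 / gamma);
    }

    int main() {
        for (double x = 0.0; x <= 1.001; x += 0.25) {
            double l = blendWeight(x), r = blendWeight(1.0 - x);
            std::printf("x=%.2f left=%.3f right=%.3f summed light=%.3f\n",
                        x, l, r, std::pow(l, 2.2) + std::pow(r, 2.2));
        }
    }
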
Lewis the Robotic Photographer; Cindy Grimm, Washington University in St. Louis
A large mobile trash can with a camera on top that wandered about trying to take pictures of people.  It used a fixed IR light source to position itself within a pre-defined area, flesh-tone and eye-shape detection in images to find faces, and a sonar range finder to detect legs.  It seemed to do a fair job of taking snapshot-quality pictures.  Of more interest was how people interacted with the bot.
NONA-Vision; Hiroo Iwata, et al University of Tsukuba
The system used nine cameras/displays to achieve semi-immersive telepresence.  The motion of an array of cameras was slaved to an array of displays moved by the participant. The cameras were pointed at a small diorama of toys; however, the audience was quite visible and interacted with both the diorama and the driver. Interesting idea, but the two distinct view targets were disconcerting.  It might have been more effective if only the audience or only the toys were visible.
Occlusive Optical See-through Displays in a Collaborative Setup; Kiyoshi Kiyokawa,et al
Optical see-through AR displays generally cannot show occlusion of graphics by real-world objects.  This HMD uses a complex set of optics to provide occlusion. It works, but with a highly limited field of view and range.  The wearer's hands and other objects are sensed by a set of cameras.
Physiological Reaction and Presence in Stressful Virtual Environments; Michael Meehan,et al, UNC
This is the classic presence-experiment Pit room from UNC. Wearing an HMD, tracker, and physiological monitors, a patron is asked to interact with real and virtual objects in a training room and then asked to walk through a door onto a ledge 20 ft above a living room. The idea is to use physiological reactions to measure the sense of presence.  Real objects in the training room, which the patron sees before donning the HMD, and a real 1" platform ledge around the pit room greatly enhance the experience.   A fair percentage of patrons refuse to step through the door.  I knew a lot about the project before trying it, yet I definitely reacted when asked to step to the edge and drop a ball down.  Part of my reaction was due to the lack of an avatar for my feet - when I looked down to find the edge of the ledge, I could not see my feet as I slid them forward.  That was very disconcerting, although adding foot trackers would be technically and logistically difficult; there is already a significant tether, carried by two assistants.

This installation was a real experiment, adding significantly to the total number of subjects.  It did require relaxing the rules for human experiments a bit - usually they exclude people who have had alcohol in the last 24 hours, are sleep deprived, or are under stress.  Those criteria would eliminate almost everyone attending Siggraph.  The experiment had significant logistics, requiring a small army of grad students, and it still took a significant amount of time to get people through.
Public Anemone: An Organic Robot Creature, Cynthia Breazeal et al, MIT Media Lab
Mostly a robotics effort that was supposed to be responsive to its audience. Some of the touch-sensitive robots had some reaction, but otherwise it didn't seem very responsive.
Regeneration of Real Objects in the Real World; Hiroto Matsuoka, et al, NTT Lifestyle and Environmental Technology Laboratories 
An ARToolKit-based project using a tablet PC with a camera attached to the back.  Virtual objects appear in several forms over the fiducial targets, depending on distance, angle, etc.  I liked the effect, but more impressive was the 3D capture system collecting geometry and digital images of highly interreflective objects like glass, porcelain, etc.
SmartFinger : Nail-Mounted Tactile Display; Hideyuki Ando, et al University of Tokyo
A voice coil, LED, and photodetector attached to the fingernail allow haptic sensing of images.  It worked OK as a quick demo when moved fairly quickly over high-contrast black-and-white lines, etc. It also worked on images visible only under ultraviolet light. A more robust, higher-resolution version with user training might have applications for the visually impaired.
The Virtual Showcase: A Projection-Based Multi-User Augmented Reality Display; Oliver Bimber et al
An inverted, truncated glass pyramid encases a real object, while graphics are projected up onto the sides of the pyramid. Multiple view angles are supported by the different pyramid sides.  It was a pretty effective device, with applications for museums, etc.
TWISTER: A Media Booth; Kenji Tanaka et al Tachi Lab, University of Tokyo
A large drum of spinning LED arrays creates a full 360-degree surround, auto-stereoscopic display for a patron willing to stand inside (trust the techies, it won't fly apart or chop you up :-). This version (#3) was built especially to travel to Siggraph. Future versions will have higher resolution (much needed) and perhaps cameras to allow for mutual telexistence portals; however, they probably won't travel much.  (A sketch of the display principle follows below.)
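
The display principle is persistence of vision: as the drum spins, each LED column is relit with the image column belonging to its current angle, so a full cylinder of pixels appears once the rotation is fast enough. A minimal sketch of that angle-to-column mapping (my illustration, not the Tachi Lab code; the autostereo part, which gives each eye a different view, is omitted):

    #include <cmath>
    #include <cstdio>

    // Persistence-of-vision indexing: given the drum angle from the
    // encoder, pick the image column to latch onto the spinning LEDs.
    const double kPi = 3.14159265358979323846;

    int columnAt(double angleRad, int columns) {
        double turns = angleRad / (2.0 * kPi);
        double frac = turns - std::floor(turns);      // wrap into [0, 1)
        return static_cast<int>(frac * columns) % columns;
    }

    int main() {
        const int columns = 360;  // assumed horizontal resolution
        for (double deg : {0.0, 90.0, 180.0, 359.9})
            std::printf("angle %6.1f deg -> image column %d\n",
                        deg, columnAt(deg * kPi / 180.0, columns));
    }
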
Ultrasound Visualization with the Sonic Flashlight; Damion Shelton, George Stetten, Wilson Chang
Attach a half-silvered mirror and a display to an ultrasound wand and you have X-ray vision (OK, ultrasonic vision, but you get the idea).  Nice idea, decent execution, but it needed a better demonstration.  I saw it early Monday (the first day), when they were using water balloons as the target objects; only one balloon had anything other than water inside, so it was hard to get the idea.  Looking at my hand was a bit better.  Unfortunately, there were no pregnant women around.
Virtual Chanbara, Daijiro Koga
Photos: Jerry Isdale and Ben Schaeffer (U. Illinois, Syzygy) in Virtual Chanbara. A virtual samurai sword-fighting game with a force-feedback 'sword' device, the GEKI2.  It uses a pair of spinning wheels that are abruptly stopped to simulate collisions of the virtual sword with the opponent or his sword. It works, but the delay while the wheels spin up again after each impulse adds an unnatural feel.  I also had some problems with the device getting caught in the HMD tether.

Art Gallery VR

I checked out several VR installations in the Art Gallery this year.
Uzume used a single-screen rear-projection stereo display to interact with a dynamic environment of chaotic attractors.  Although the graphics were generally fairly simple mathematical plots rather than complex textured polygons (the plots themselves could be quite complex), I found it quite engaging. I also had a nice chat with Roland Blach, who provided the technical expertise.

NEWYORKEXITNEWYORK used a large front-projection screen to show an artistic interpretation of a city.  The virtual world was built from simple flat polygons textured with images captured from the streets of New York City.  Several simple cycle animations gave a sense of activity to the world.  Navigation was by joystick, with several modes that I found a bit confusing.
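
Back to Uzume's attractors for a moment: such imagery is just the integrated trail of a chaotic dynamical system, redrawn each frame as the trajectory evolves. A minimal example generating a trail along the classic Lorenz system (my illustration; I don't know which attractors Uzume actually used):

    #include <cstdio>

    // Trace a trail along the Lorenz attractor, the kind of evolving
    // mathematical plot an installation like Uzume can render as glowing
    // line strips. Classic parameters; simple Euler integration is crude
    // but fine for a visual.
    struct Vec3 { double x, y, z; };

    Vec3 lorenzStep(Vec3 p, double dt, double sigma = 10.0,
                    double rho = 28.0, double beta = 8.0 / 3.0) {
        return { p.x + dt * sigma * (p.y - p.x),
                 p.y + dt * (p.x * (rho - p.z) - p.y),
                 p.z + dt * (p.x * p.y - beta * p.z) };
    }

    int main() {
        Vec3 p{1.0, 1.0, 1.0};
        for (int i = 0; i < 10000; ++i) {
            p = lorenzStep(p, 0.005);
            if (i % 2000 == 0)   // sample a few points of the trail
                std::printf("%8.3f %8.3f %8.3f\n", p.x, p.y, p.z);
        }
    }
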
Journey to the Oceans of the World was a surround animation rather than an interactive virtual environment.  It used six (?) projectors to display animation from a bank of DVD players onto simple cloth screens (bed sheets?) that surrounded a set of beanbag chairs.  The installation was located on a lower floor, far from most other activities; if I had not been attending a BOF next door, I would have missed it entirely.  Although not interactive, it was a nice piece - sort of like sitting at the bottom of a reef aquarium watching the creatures swim by.
Shadow Garden was an Artificial Reality system in the footsteps of Myron Krueger (although the artists did not know Myron's work).  Two front-projection displays showed different 2D environments that interacted with either the patron's shadow or the light of a flashlight.  A simple video camera provided the input.  While not VR, the installation did show the fun of interacting with animation, and at a fairly low cost.

Studio

The Studio is an area where attendees can sign up to actually try out products and research projects.  This year it included several animation, 3D printing, and motion capture tools.  It also had several hammocks, which were quite popular resting places for those who found them.

There was one VR-related exhibit: "Body Language User Interface", or BLUIsculpt™, from the University of Alaska Fairbanks / Arctic Region Supercomputing Center.  This is a semi-immersive, voxel-based modeling tool.  The Studio version used a single 10 ft rear-projection stereo display (Christie Digital's polarized active/passive projector) with an Intersense IS900 tracker.  Virtual sculpting is an interesting idea - SensAble's FreeForm product does it nicely with haptic display.  I'm not sure the voxel technology of BLUIsculpt works as well; perhaps a sphere-based interface similar to Pixologic's ZSpheres might work better.  (A toy sketch of voxel sculpting follows below.)
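
To give a flavor of the approach: voxel sculpting amounts to stamping the tracked wand's position into a 3D occupancy grid, adding or clearing material within the brush radius, and re-extracting a surface for display. A toy sketch of the stamping step (my illustration of the general technique, not BLUIsculpt's code):

    #include <cstdio>
    #include <vector>

    // Toy voxel-sculpting kernel: stamp a spherical brush into an
    // occupancy grid at the tracked wand position. A real tool would
    // then re-mesh the changed region (e.g. marching cubes) for display.
    class VoxelGrid {
        int n;                        // grid is n x n x n
        std::vector<unsigned char> v;
    public:
        explicit VoxelGrid(int size) : n(size), v(size * size * size, 0) {}
        unsigned char& at(int x, int y, int z) { return v[(z * n + y) * n + x]; }

        // add=true deposits material, add=false carves it away
        void stamp(double cx, double cy, double cz, double r, bool add) {
            for (int z = 0; z < n; ++z)
                for (int y = 0; y < n; ++y)
                    for (int x = 0; x < n; ++x) {
                        double dx = x - cx, dy = y - cy, dz = z - cz;
                        if (dx * dx + dy * dy + dz * dz <= r * r)
                            at(x, y, z) = add ? 1 : 0;
                    }
        }
    };

    int main() {
        VoxelGrid g(32);
        g.stamp(16, 16, 16, 6, true);   // deposit a blob at the wand
        g.stamp(16, 16, 20, 3, false);  // carve a dent out of it
        std::printf("center voxel: %d\n", g.at(16, 16, 16));
    }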

BOFs

Wednesday - Scene Graph BOF

This BOF was a gathering of people interested in scene graphs, particularly open source scene graphs. Open Scene Graph, OpenSG, OpenRM, PLib/SSG, and SGL are several of the open source scene graphs available. SGI's OpenGL Performer is not open source, but it is still one of the most popular for VR projects.  All were represented at the BOF, which followed the OpenSceneGraph BOF and started with a presentation on the state of OpenSG (v1.1) by Dirk Reiners.

There was a lively discussion of the ability to read different scene graph formats. Generally, every scene graph package implements its own file format that closely reflects its internal data structures and design decisions.  Reading a non-native format exposes all the differences and can be a difficult problem.  Project Lodestone (or perhaps this link) is an open source attempt to create an API to address the problem.  Lodestone would provide basic readers for various formats, and each scene graph would implement a layer on top of the Lodestone API to map its nodes into the native format.  This seems to me to be akin to the XML SAX callback system.  There was some disagreement about the utility of Lodestone, probably stemming from the different views of a translation API versus an interchange file format. VRML and X3D might provide a common, scene-graph-agnostic (?) interchange file format; Lodestone would read those files as well as other formats.  While not a scene graph expert, I am not sure that the current Lodestone approach is best. A callback-style API has a fair number of deficiencies, especially when there may be confusion or a design mismatch in how the graph nodes affect each other.  A DOM-style API would read the specific scene graph into a neutral-format document graph, which could then be traversed at will by the translation layer, as the sketch below illustrates.
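
To make the contrast concrete, here are the two API shapes in miniature: a SAX-style reader pushes events and the target scene graph must rebuild context on the fly, while a DOM-style reader hands back a neutral in-memory graph that can be traversed in whatever order the target needs. These interfaces are hypothetical illustrations, not Lodestone's actual API:

    #include <cstdio>
    #include <memory>
    #include <string>
    #include <vector>

    // SAX-style: the format reader drives these callbacks in file order;
    // the target scene graph must track parents and inherited state itself.
    struct SceneEvents {
        virtual void beginGroup(const std::string& name) = 0;
        virtual void mesh(const std::string& name) = 0;
        virtual void endGroup() = 0;
        virtual ~SceneEvents() = default;
    };
    // usage shape: reader.parse(file, handler);

    // DOM-style: the reader builds a neutral node graph first, so the
    // translation layer can traverse it at will, which is easier when
    // node semantics (transforms, materials) interact across formats.
    struct SceneNode {
        std::string type, name;                        // e.g. "group", "mesh"
        std::vector<std::unique_ptr<SceneNode>> kids;  // owned children
    };
    // usage shape: std::unique_ptr<SceneNode> root = reader.parse(file);

    int main() {
        SceneNode root{"group", "world", {}};
        root.kids.push_back(std::make_unique<SceneNode>(
            SceneNode{"mesh", "teapot", {}}));
        std::printf("root '%s' has %zu child\n",
                    root.name.c_str(), root.kids.size());
    }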

Another controversial point was brought up by Steve Baker of PLib fame: that hardware shaders and the new crop of shader languages (Cg, OpenGL 2.0, etc.) will render conventional scene graphs obsolete.  Steve bases this on his experience with the difficulty of handling the messy details of texture/material nodes in different scene graph systems.  The SGI representatives noted that they are working on incorporating portions of OpenGL Shader (and other SGI Open** libraries) into OpenGL Performer.  An interesting direction for shaders appeared in the Siggraph paper "Shader-Driven Compilation of Rendering Assets" by Paul Lalonde and Eric Schenk of Electronic Arts (Canada) Inc.  It does not mention scene graphs directly, but it discusses techniques to compile an interactive world description, including shaders, for use on a variety of game platforms.

Wed - SigCARTO BOF

(another writeup of this BOF by Theresa-Marie Rhyne, ACM SIGGRAPH Carto Project Director)
I got there late, so see Theresa-Marie's writeup for most of what happened.  Mostly, I heard about the GeoVRML project.  Martin Reddy of SRI has been leading the discussion list but is moving on to work at Pixar, so leadership is passing to Mike McCann of the Monterey Bay Aquarium Research Institute (http://www.mbari.org/).  Martin and several co-authors have a new book out, "Level of Detail for 3D Graphics"; it looks quite good.  Martin's departure from SRI also means the open source tsmAPI and the less open TerraVision will have less active development.  Martin indicated SRI may open source TerraVision.

Don Brutzman distributed the latest X3D CD-ROM.  X3D is an XML interchange format for virtual environments that supports tags derived from GeoVRML.

Mike McCann gave a presentation on MBARI's use of GeoVRML worlds to provide access to and visualization of data from their remotely operated underwater vehicles (ROVs).  It is an excellent use of the technology for educational and scientific applications.

Other Stuff

Siggraph Online: This year and last, the Online Committee has been collecting presentations and videotaping the papers and panels in an effort to make them available on the web after the conference.  Last year was a BIG effort that has yet to be completed; the beta site has been online since January but still has some holes.  This year they scaled back the effort, eliminating courses, etc.  The basic effort was nearly complete by the end of the show, but some logistic problems have delayed its publication.  Hopefully both of these excellent resources will be up and available to the public soon.

Courses, papers, panels...  There were a LOT of courses this year - presentations from most of them are on the course CD-ROM.  Almost more courses (59) than papers (67)!  I'm quite pleased to have the CD-ROMs of both.  Hopefully ACM will continue to provide these for full-conference attendees.  I did not attend any of the papers/panel sessions this year; I was hoping to catch them online.