[Note: this section needs to be updated with respect to VRML, etc.]
The storage of information on objects and the world is a major part of the
design of a VR system. The primary things that are stored in the World Database
(or World Description Files) are the objects that inhabit the world, scripts
that describe actions of those objects or the user (things that happen to the
user), lighting, program controls, and hardware device support.
There are a number of different ways the world information may be stored: a
single file, a collection of files, or a database. The multiple file method is
one of the more common approaches for VR development packages. Each object has
one or more files (geometry, scripts, etc.) and there is some overall 'world'
file that causes the other files to be loaded. Some systems also include a
configuration file that defines the hardware interface connections.
Sometimes the entire database is loaded during program startup; other systems
read only the currently needed files. A real database system helps tremendously
with the latter approach. An Object Oriented Database would be a great fit for
a VR system, but I am not aware of any projects currently using one.
The data files are most often stored as ASCII (human readable) text files.
However, in many systems these are replaced by binary computer files. Some
systems have all the world information compiled directly into the application.
Objects in the virtual world can have geometry, hierarchy, scripts, and other
attributes. The capabilities of objects have a tremendous impact on the
structure and design of the system. In order to retain flexibility, a list of
named attribute/value pairs is often used. Thus attributes can be added to the
system without requiring changes to the object data structures.
These attribute lists would be addressable by name (i.e. cube.mass => mass
of the cube object). They may be a scalar, vector, or expression value. They
may be addressable from within the scripts of their object. They might be
accessible from scripts in other objects.
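A minimal sketch of such an attribute list, with hypothetical names (`WorldObject`, `set_attr`, `get_attr` are illustrative, not from any particular package) — scalars, vectors, and expression values all live in one dictionary:

```python
# A sketch of named attribute/value pairs: attributes can be added at
# run time without changing the object data structures.
class WorldObject:
    def __init__(self, name):
        self.name = name
        self.attributes = {}          # name -> scalar, vector, or expression

    def set_attr(self, key, value):
        self.attributes[key] = value

    def get_attr(self, key):
        value = self.attributes[key]
        # An 'expression' attribute is stored as a callable and
        # evaluated on each access.
        return value() if callable(value) else value

cube = WorldObject("cube")
cube.set_attr("mass", 2.5)                     # scalar (cube.mass)
cube.set_attr("velocity", (0.0, 1.0, 0.0))     # vector
cube.set_attr("weight", lambda: cube.get_attr("mass") * 9.8)  # expression
```

Scripts in the same or other objects could then address `cube.mass` by routing the dotted name through `get_attr`.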
An object is positionable and orientable. That is, it has a location and
orientation in space. Most objects can have these attributes modified by
applying translation and rotation operations. These operations are often
implemented using methods from vector and matrix algebra.
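The translation and rotation operations can be sketched with ordinary trigonometry (the function names here are illustrative):

```python
import math

# rotate_z applies the standard rotation about the Z axis:
#   x' = x cos(a) - y sin(a),  y' = x sin(a) + y cos(a)
def rotate_z(point, angle):
    c, s = math.cos(angle), math.sin(angle)
    x, y, z = point
    return (c * x - s * y, s * x + c * y, z)

# translate simply adds an offset vector to the position.
def translate(point, offset):
    return tuple(p + o for p, o in zip(point, offset))

# Rotate a point 90 degrees about Z, then move it up one unit.
p = rotate_z((1.0, 0.0, 0.0), math.pi / 2)
p = translate(p, (0.0, 0.0, 1.0))
```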
An object may be part of an object HIERARCHY, with parent, sibling, and
child objects. Such an object inherits the transformations applied to its
parent object and passes these on to its children. Hierarchies are
used to create jointed figures such as robots and animals. They can also be
used to model other things, like the sun, planets and moons in a solar system.
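One way such inheritance might be sketched (a hypothetical `Node` class, handling translation only and ignoring rotation for brevity):

```python
# Each node stores a local offset relative to its parent; a child's
# world position is the composition of all ancestor transforms.
class Node:
    def __init__(self, name, offset=(0.0, 0.0, 0.0)):
        self.name = name
        self.offset = offset        # translation relative to the parent
        self.children = []

    def attach(self, child):
        self.children.append(child)
        return child

    def world_position(self, parent_pos=(0.0, 0.0, 0.0)):
        # Compose the parent's position with this node's local offset.
        return tuple(p + o for p, o in zip(parent_pos, self.offset))

robot = Node("robot", (5.0, 0.0, 0.0))
arm = robot.attach(Node("arm", (0.0, 2.0, 0.0)))
# Moving the robot automatically moves the attached arm with it:
arm_world = arm.world_position(robot.world_position())
```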
Additionally, an object should include a BOUNDING VOLUME. The simplest
bounding volume is the Bounding Sphere, specified by a center and radius.
Another simple alternative is the Bounding Cube. This data can be used for
rapid object culling during rendering and trigger analysis. Objects whose
bounding volume is completely outside the viewing area need not be transformed
or considered further during rendering. Collision detection with bounding
spheres is very rapid. It could be used alone, or as a method for culling
objects before more rigorous collision detection algorithms are applied.
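The bounding-sphere collision test can be sketched as follows; comparing squared distances avoids the square root, which is part of what makes the test so rapid:

```python
# Two spheres intersect when the distance between their centers is no
# greater than the sum of the radii.  Squaring both sides of that
# comparison removes the need for a square root.
def spheres_collide(center_a, radius_a, center_b, radius_b):
    dist_sq = sum((a - b) ** 2 for a, b in zip(center_a, center_b))
    return dist_sq <= (radius_a + radius_b) ** 2

# Overlapping unit spheres 1.5 units apart:
near = spheres_collide((0.0, 0.0, 0.0), 1.0, (1.5, 0.0, 0.0), 1.0)
```

A full system would run this cheap test first and apply a rigorous per-polygon check only to the pairs that pass.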
The modeling of object shape and geometry is a large and diverse field. Some
approaches seek to very carefully model the exact geometry of real world
objects. Other methods seek to create simplified representations. Most VR
systems sacrifice detail and exactness for simplicity, for the sake of
rendering speed.
The simplest objects are single dimensional points. Next come the two
dimensional vectors. Many CAD systems create and exchange data as 2D views.
This information is not very useful for VR systems, except for display on a 2D
surface within the virtual world. There are some programs that can reconstruct
a 3D model of an object, given a number of 2D views.
The sections below discuss a number of common geometric modeling methods. The
choice of method used is closely tied to the rendering process used. Some
renderers can handle multiple types of models, but most use only one,
especially for VR use. The modeling complexity is generally inversely
proportional to the rendering speed. As the model gets more complex and
detailed, the frame rate drops.
PolyLines & PolyPoints
The simplest 3D objects are known as PolyPoints and PolyLines. A PolyPoint is
simply a collection of points in space. A Polyline is a set of vectors that
form a continuous line.
The most common form of objects used in VR systems are based on flat
polygons. A polygon is a planar, closed, multi-sided figure. Polygons may be
convex or concave, but some systems require convex polygons. The use of polygons often
gives objects a faceted look. This can be offset by more advanced rendering
techniques such as the use of smooth shading and texture mapping.
Some systems use simple triangles or quadrilaterals instead of more general
polygons. This can simplify the rendering process, as all surfaces have a known
shape. However, it can also increase the number of surfaces that need to be
rendered.
Polygon Mesh Format (aka Vertex Join Set) is a useful form of polygonal
object. For each object in a Mesh, there is a common pool of Points that are
referenced by the polygons for that object. Transforming these shared points
reduces the calculations needed to render the object. A point at the corner of a
cube is only processed once, rather than once for each of the three polygons
that reference it. The PLG format used by REND386 is an example of a Polygonal
Mesh, as is the BYU format used by the 'ancient' MOVIE.BYU program.
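A polygon mesh of this kind might be sketched as a shared point pool plus index lists (an illustrative structure, not the actual PLG or BYU layout):

```python
# Polygons index into a shared pool of points, so each point is
# transformed exactly once no matter how many polygons reference it.
square_mesh = {
    "points": [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
    "polygons": [[0, 1, 2, 3]],   # index lists into "points"
}

def transform_mesh(mesh, fn):
    # Apply fn once per shared point; polygon index lists are unchanged.
    return {"points": [fn(p) for p in mesh["points"]],
            "polygons": mesh["polygons"]}

moved = transform_mesh(square_mesh, lambda p: (p[0] + 1, p[1], p[2]))
```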
The geometry format can support precomputed polygon and vertex normals. Both
Polygons and vertices should be allowed a color attribute. Different renderers
may use or ignore these and possibly more advanced surface characteristics.
Precomputed polygon normals are very helpful for backface polygon removal.
Vertices may also have texture coordinates assigned to support texture or other
image mapping techniques.
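A polygon normal can be precomputed as the cross product of two edges; the backface test is then a single dot product against the view direction. A sketch:

```python
# The normal of polygon (a, b, c) is the cross product of edges
# (b - a) and (c - a).
def polygon_normal(a, b, c):
    u = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    v = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

# A polygon faces away from the viewer when its normal points in
# roughly the same direction as the view vector.
def is_backfacing(normal, view_dir):
    dot = sum(n * d for n, d in zip(normal, view_dir))
    return dot >= 0
```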
Some systems provide only Primitive Objects, such as cubes, cones, and
spheres. Sometimes, these objects can be slightly deformed by the modeling
package to provide more interesting objects.
Solid Modeling & Boolean Operations
Solid Modeling (aka Constructive Solid Geometry, CSG) is one form of geometric
modeling that uses primitive objects. It extends the concept by allowing
addition, subtraction, and other Boolean operations between these
primitives. This can be very useful in modeling objects when you are concerned
with doing physical calculations, such as center of mass, etc. However, this
method incurs significant calculation and is not very useful for real-time VR
applications. It is possible to convert a CSG model into polygons. Polygonal
models of varying complexity (# polygons) could be made from a single high
resolution 'metaobject' of a CSG type.
Another advanced form of geometric modeling is the use of curves and curved
surfaces (aka patches). These can be very effective in representing complex
shapes, like the curved surface of an automobile, ship or beer bottle. However,
there is significant calculation involved in determining the surface location
at each pixel; thus, curve-based modeling is not used directly in VR systems. It
is possible, however, to design an object using curves and then compute a
polygonal representation of those curved patches. Various complexity polygonal
models could be made from a single high resolution 'metaobject'.
Dynamic Geometry (aka morphing)
It is sometimes desirable to have an object that can change shape. The shape
might simply be deformed, such as a bouncing ball or the squash/stretch used in
classical animation ('toons'), or it might actually undergo metamorphosis into
a completely different geometry. The latter effect is commonly known as
'morphing' and has been extensively used in films, commercials and television
shows. Morphing can be done in the image domain (2D morph) or in the geometry
domain (3D morph). The latter is applicable to VR systems. The simplest method
of doing a 3D morph is to precompute the various geometries and step through
them as needed. A system with significant processing power can handle real time
morphing.
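The precomputed-geometries approach can be sketched as linear interpolation between two vertex lists with matching point counts:

```python
# Morph between two precomputed shapes: t = 0.0 gives shape_a,
# t = 1.0 gives shape_b, values between blend the vertex positions.
def morph(shape_a, shape_b, t):
    return [tuple(a + (b - a) * t for a, b in zip(pa, pb))
            for pa, pb in zip(shape_a, shape_b)]

# One-vertex 'shapes' for brevity; real geometries share a full
# vertex list.
shape_a = [(0.0, 1.0, 0.0)]
shape_b = [(1.0, 1.0, 1.0)]
halfway = morph(shape_a, shape_b, 0.5)
```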
Swept Objects & Surfaces of Revolution
A common method for creating objects is known as Sweeping and Surfaces of
Revolution. These methods use an outline or template curve and a backbone. The
template is swept along the backbone creating the object surface (or rotated
about a single axis to create a surface of revolution). This method may be used
to create either curved surfaces or polygonal objects. For VR applications, the
sweeping would most likely be performed during the object modeling (creation)
phase, and the resulting polygonal object stored for real time use.
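A surface of revolution might be computed at modeling time along these lines (an illustrative function that rotates a 2D template curve about the Y axis to produce a polygonal vertex pool):

```python
import math

# Revolve a template of (radius, height) pairs about the Y axis in a
# fixed number of angular steps, producing 3D vertices.
def revolve(template, steps):
    points = []
    for i in range(steps):
        angle = 2.0 * math.pi * i / steps
        c, s = math.cos(angle), math.sin(angle)
        for r, y in template:
            points.append((r * c, y, r * s))
    return points

# A two-point template (a cylinder profile) revolved in 8 steps
# yields 16 vertices, ready to be joined into polygons and stored.
cylinder = revolve([(1.0, 0.0), (1.0, 2.0)], 8)
```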
Texture Maps & Billboard Objects
As mentioned in the section on rendering, texture maps (images) can be used to
provide the appearance of more geometric complexity without the geometric
calculations. Using flat polygonal objects that maintain an orientation towards
the eye/camera (billboards) and multiple texture maps can extend this trick
even further. Texture maps, even without billboard objects, are an excellent
way to increase apparent scene complexity. Variations on the image mapping
concept are also used to simulate reflections, etc.
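A billboard's orientation can be sketched as a single yaw angle that turns the flat polygon toward the camera (rotation about the vertical axis only; the function name is illustrative):

```python
import math

# Compute the rotation about Y that points a flat object's face
# toward the camera each frame.
def billboard_yaw(object_pos, camera_pos):
    dx = camera_pos[0] - object_pos[0]
    dz = camera_pos[2] - object_pos[2]
    return math.atan2(dx, dz)

# Camera straight ahead on +Z: no turn needed.
yaw = billboard_yaw((0.0, 0.0, 0.0), (0.0, 0.0, 5.0))
```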
Lighting is a very important part of a virtual world (if it is visually
rendered). Lights can be ambient (everywhere), or located. Located lights have
position and may have orientation, color, intensity and a cone of illumination.
The more complex the light source, the more computation is required to simulate
its effect on objects.
Cameras or viewpoints may be described in the World Database. Generally, each
user has only one viewpoint at a time (ok, two closely spaced viewpoints for
stereoscopic systems). However, it may be useful to define alternative cameras
that can be used as needed. An example might be an overhead camera that shows a
schematic map of the virtual world and the user's location within it (a 'You
Are Here' map).
World Scripting and Object Behavior
A virtual world consisting only of static objects is only of mild interest.
Many researchers and enthusiasts of VR have remarked that interaction is the
key to a successful and interesting virtual world. This requires some means of
defining the actions that objects take on their own and when the user (or other
objects) interact with them. This I refer to generically as World
Scripting. I divide the scripts into three basic types: Motion Scripts, Trigger
Scripts, and Connection Scripts.
Scripts may be textual or they might be actually compiled into the program
structure. The use of visual programming languages for world design was
pioneered by VPL Research with their Body Electric system. This Macintosh based
language used 2D blocks on the screen to represent inputs, objects and
functions. The programmer would connect the boxes to indicate data flow.
There is no common scripting language used in today's VR products. The
commercial authoring packages, such as VR Studio, VREAM and Superscape all
contain some form of scripting language. Autodesk's CDK has the "Cyberspace
Description Format" (CDF) and the Distributed Shared Cyberspace Virtual
Representation (DSCVR) database. These are only partially implemented in the
current release. They are derived from the Linda distributed programming
language/database system. ("Coordination Languages and their Significance",
David Gelernter and Nicholas Carriero, Communications of the ACM, Feb 1992,
V35N2). On the homebrew/freeware side, some people are experimenting with
several Object Oriented interpretive languages such as BOB ("Your Own Tiny
Object-Oriented Language", David Betz, Dr. Dobb's Journal, Sept 1991). Object
Orientation, although perhaps not in the conventional class-inheritance
mechanism, is very nicely suited to world scripting. Interpretive languages are
faster for development, and often more accessible to 'non-programmers'.
Motion scripts modify the position, orientation or other attributes of an
object, light or camera based on the current system tick. A 'tick' is one
advancement of the simulation clock. Generally, this is equivalent to a single
frame of visual animation. (VR generally uses Discrete Simulation methods)
For simplicity and speed, only one motion script should be active for an
object at any one instant. Motion scripting is a potentially powerful
feature, depending on how complex we allow these scripts to become. Care must
be exercised since the interpretation of these scripts will require time, which
impacts the frame and delay rates.
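A motion script driven by the simulation tick might be sketched like this (a hypothetical interface: the script is just a function of the object and the current tick):

```python
import math

# One motion script active per object: called once per tick, it
# updates a single attribute based on the simulation clock.
def spin_script(obj, tick, ticks_per_rev=60):
    # One full revolution about Z every ticks_per_rev ticks.
    obj["rotation_z"] = 2.0 * math.pi * (tick % ticks_per_rev) / ticks_per_rev

prop = {"rotation_z": 0.0}
spin_script(prop, 15)   # 15 of 60 ticks: a quarter revolution
```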
Additionally, a script might be used to attach or detach an object from a
hierarchy. For example, a script might attach the user to a CAR object when he
wishes to drive around the virtual world. Alternatively, the user might 'pick
up' or attach an object to himself.
Physical or Procedural Modeling and Simulation
A complex simulation could be used that models the interactions of the real
physical world. This is sometimes referred to as Procedural Modeling. It can be
a very complex and time consuming application. The mathematics required to
solve the physical interaction equations can also be fairly complex. However,
this method can provide a very realistic interaction mechanism. (for more on
Physical Simulation, see the book by Ronen Barzel listed in the Computer
Graphics Books section)
A simpler method of animation is to use simple formulas for the motion of
objects. A very simple example would be "Rotate about Z axis once every 4
seconds". This might also be represented as "Rotate about Z 10 radians each
tick".
A slightly more advanced method of animation is to provide a 'path' for the
object with controls on its speed at various points. These controls are
sometimes referred to as "slow in-out". They provide a much more realistic
motion than simple linear motion.
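The 'slow in-out' control can be sketched with a smoothstep ease curve that remaps linear time before interpolating along the path:

```python
# 3t^2 - 2t^3 has zero slope at t=0 and t=1, so motion starts slowly,
# cruises, and slows to a stop (slow in, slow out).
def smoothstep(t):
    return t * t * (3.0 - 2.0 * t)

# Interpolate along a straight path segment using the eased time.
def point_on_path(start, end, t):
    eased = smoothstep(t)
    return tuple(a + (b - a) * eased for a, b in zip(start, end))
```

This gives much more natural motion than feeding raw linear `t` into the interpolation.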
If the motion is fixed, some systems can precompute the motion and provide a
'channel' of data that is evaluated at each time instance. This may be a simple
lookup table with exact values for each frame, or it may require some sort of
interpolation between stored values.
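Such a channel might be sketched as a per-frame table with linear interpolation for in-between times:

```python
# Precomputed motion channel: exact values per frame, with linear
# interpolation for queries that fall between frames.
channel = [0.0, 1.0, 4.0, 9.0]        # e.g. a height sampled each frame

def sample(channel, frame_time):
    i = int(frame_time)
    if i >= len(channel) - 1:
        return channel[-1]            # clamp past the last frame
    frac = frame_time - i
    return channel[i] + (channel[i + 1] - channel[i]) * frac
```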
Trigger Scripts are invoked when some trigger event occurs, such as collision,
proximity or selection. The VR system needs to evaluate the trigger parameters
at each TICK. For proximity detectors, this may be a simple distance check from
the object to the 3D eye or effector object (aka virtual human). Collision
detection is a more involved process. It is desirable but may not be practical
without offloading the rendering and some UI tasks from the main processor.
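A proximity trigger of this kind might be sketched as a squared-distance check performed each tick (the callback interface here is hypothetical):

```python
# Fire a trigger script when the effector comes within a threshold
# distance of the object; evaluated once per simulation tick.
def proximity_trigger(object_pos, effector_pos, threshold, on_fire):
    dist_sq = sum((a - b) ** 2 for a, b in zip(object_pos, effector_pos))
    if dist_sq <= threshold ** 2:
        on_fire()
        return True
    return False

events = []
# Effector 3 units away, threshold 5: the trigger fires.
proximity_trigger((0, 0, 0), (0, 3, 0), 5.0,
                  lambda: events.append("door opens"))
```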
Connection scripts control the connection of input and output devices to
various objects. For example a connection script may be used to connect a glove
device to a virtual hand object. The glove movements and position information
are used to control the position and actions of the hand object in the virtual
world. Some systems build this function directly into the program. Other
systems are designed such that the VR program is almost entirely a collection
of connection scripts.
The user must be given some indication of interaction feedback when the
virtual cursor selects or touches an object. Crude systems have only the visual
feedback of seeing the cursor (virtual hand) penetrate an object. The user can
then grasp or otherwise select the object. The selected object is then
highlighted in some manner. Alternatively, an audio signal could be generated
to indicate a collision. Some systems use simple touch feedback, such as a
vibration in the joystick, to indicate collision, etc.
User Interface/Control Panels
A VR system often needs to have some sort of control panels available to the
user. The world database may contain information on these panels and how they
are integrated into the application. Alternatively, they may be a part of the
program code.
There are several ways to create these panels. There could be 2D menus that
surround a WoW display, or are overlaid onto the image. An alternative is to
place control devices inside the virtual world. The simulation system must then
note user interaction with these devices as providing control over the world.
One primary area of user control is control of the viewpoint (moving around
within the virtual world). Some systems use the joystick or similar device to
move. Others use gestures from a glove, such as pointing, to indicate the
direction of motion.
The user interface to the VW might be restricted to direct interaction in the
3D world. However, this is extremely limiting and requires lots of 3D
calculations. Thus it is desirable to have some form of 2D Graphical user
interface to assist in controlling the virtual world. These 'control panels'
would appear to occlude portions of the 3D world, or perhaps the 3D world
would appear as a window or viewport set in a 2D screen interface. The 2D
interactions could also be represented as a flat panel floating in 3D space,
with a 3D effector controlling them.
There are four primary types of 2D controls and displays (controls cause
changes in the virtual world; displays show some measurement of the VW):
Buttons, Sliders, Gauges, and Text. Buttons may be menu items with either icons
or text identifiers. Sliders are used for more analog control over various
attributes. A variation of a slider is the dial, but these are harder to
implement as 2D controls. Gauges are graphical depictions of the value of some
attribute(s) of the world. Text may be used for both control and display. The
user might enter text commands to some command parser. The system may use text
displays to show the various attributes of the virtual world.
An additional type of 2D display might be a map or locator display. This would
provide a point of reference for navigating the virtual world.
The VR system needs a definition for how the 2D cursor affects these areas. It
may be desirable to have a notion of a 'current control' that is the focus of
the activity (button pressed, etc.) for the 2D effector. Perhaps the arrow keys
on the keyboard could be used to change the current control, instead of using
the mouse (which might be part of the 3D effector at present).
Some systems place the controls inside the virtual world. These are often
implemented as a floating control panel object. This panel contains the usual
2D buttons, gauges, menu items, etc., perhaps given a 3D representation.
There have also been some published articles on 3D control Widgets. These are
interaction methods for directly controlling the 3D objects. One method
implemented at Brown University attaches control handles to the objects. These
handles can be grasped, moved, twisted, etc. to cause various effects on an
object. For example, twisting one handle might rotate the object, while a
'rack' widget would provide a number of handles that can be used to deform the
object by twisting its geometry.
Hardware Control & Connections
The world database may contain information on the hardware controls and how
they are integrated into the application. Alternatively, they may be a part of
the program code. Some VR systems put this information into a configuration
file. I consider this extra file simply another part of the world database.
The hardware mapping section would define the input/output ports, data speeds,
and other parameters for each device. It would also provide for the logical
connection of that device to some part of the virtual world. For example a
position tracker might be associated with the viewer's head or hand.
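A hardware mapping entry might be sketched like this (all field, port, and device names are illustrative, not from any particular system):

```python
# Each device entry holds its I/O parameters plus a logical connection
# into the virtual world (which object or body part it drives).
hardware_map = {
    "head_tracker": {
        "port": "/dev/ttyS0",      # assumed serial port name
        "baud": 19200,
        "connect_to": "viewer.head",
    },
    "glove": {
        "port": "/dev/ttyS1",
        "baud": 9600,
        "connect_to": "hand_object",
    },
}

def connection_for(device):
    # Look up which part of the virtual world a device is wired to.
    return hardware_map[device]["connect_to"]
```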
If the system supports the division of the virtual world into different areas,
the world database would need multiple scene descriptions. Each area
description would give the names of the objects in the scene and a stage
description (i.e. size, backgrounds, lighting, etc.). There would also be some
method of moving
between the worlds, such as entering a doorway, etc., that would most likely be
expressed in object scripts.