Revision History

Issue  Date           Person        Reason - include problem #, ECN #, etc.
0.2    May 5, 2001    Nigel Brooke  Updated version with detailed class information.
0.3    June 5, 2001   Nigel Brooke  Updates after internal review by Bert Sandie.
1.0    June 27, 2001  Nigel Brooke  Updates after formal review by Tim Bennison, Stephen Lambie, Neall Verheyde, Juneko Kurahashi and Bert Sandie.
2.0    June 28, 2001  Bert Sandie   Formatting: added revision history, table of contents, etc.
Radical Entertainment. All rights reserved. No part of this publication, or any software included with it may be reproduced, stored in a retrieval
system, or transmitted in any form or by any means, including photocopying, electronic, mechanical, recording or otherwise, without prior written
permission of the copyright holder.
This document contains proprietary information of Radical Entertainment. The contents are confidential and any disclosure to persons other than
the officers, employees, agents, or subcontractors of the owner or licensee of this document, without prior written consent of Radical
Entertainment is strictly prohibited.
Radical Entertainment Technical Design
DA Rendering Engine 18-Oct-02
TABLE OF CONTENTS
1. Introduction
   1.1 Scope
   1.2 Intended Audience
   1.3 References
6. Platform Considerations
   6.1 Supported Platforms
   6.2 Platform Compatibility
   6.3 Multi-platform Development Strategy
1. INTRODUCTION
This document describes the architecture of the graphics rendering engine to be used for the Dark Angel project. It outlines the data formats used as input to the rendering engine (and the algorithms by which they are produced), the rendering algorithms used in the game runtime, a high-level architectural view of the runtime rendering engine, and a detailed technical design of the classes that make up the rendering engine.
1.1 Scope
This document deals with the runtime components of the rendering engine, as well as the tool
pipeline used to transform data produced by Maya, the World Builder and associated tools, into a
format usable by the runtime. It does not address the World Builder itself, or any runtime
considerations not related to graphics rendering (sound, AI, etc.).
1.2 Intended Audience
A basic knowledge of 3D graphics algorithms and math, as well as familiarity with the Pure3D API, is assumed. This document is aimed primarily at programmers and other technical personnel on the project team and in support and management roles.
1.3 References
The four key features of the external design of the renderer are:
- Narrowness of interface
- Opacity
- Maximum state retention
- Reuse
“Narrowness of interface” means that the game communicates with the renderer through as small a set of function entry points as possible. Of necessity this interface must be very high level, and deal in the game's concepts rather than in rendering concepts. The key benefit here is that, as long as the interface is defined early and well, the renderer and the rest of the game can develop with very little dependency on each other.
“Opacity” means that the operations of the rendering system will be completely invisible to other game systems. Similarly, the operations of the other game systems are not visible to the renderer. The game should never reference internal rendering data structures directly, and vice versa. The main improvement this brings is in the stability of the system. Since communication only goes through a small set of entry points (narrowness of interface, above) with no direct coupling, the set of inputs to the system is easy to examine, and the behavior of the system is very reproducible. Some exceptions will, unfortunately, need to be made to this principle. In particular, the renderer will need to collaborate more closely with Scrooby to manage rendering of the front end and overlays than a strict application of this policy would allow. This is because the front-end rendering exists in a separate, self-contained system, and thus the renderer will not be issuing Pure3D calls related to front-end rendering. However, this rendering needs to take place within the context of the Pure3D scene. Thus at some point during a frame, the renderer will need to call into Scrooby to perform front-end rendering.
“Maximum state retention” means that the renderer holds as much data internally from frame to frame as it can. Rather than the game AI sending the entire state to the renderer every frame, as in a more conventional immediate mode rendering system like Pure3D, the AI will only send the renderer information about the changes in the world. As mentioned above, this data would be in a very high level format. This gives the renderer the flexibility to store its data in whatever format it likes (and even change formats dramatically without affecting the rest of the game). Coupled with the ability to exploit coherence between frames, this will allow the renderer to achieve very high performance compared to a less flexible immediate mode rendering system.
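The retained-state idea above can be sketched in a few lines. This is an illustrative example only; the class and method names are hypothetical and the matrix is reduced to a translation so the sketch stays short.

```cpp
#include <cassert>
#include <map>

// Stand-in for a full 4x4 transform; a translation is enough for the sketch.
struct Matrix { float tx, ty, tz; };

class RetainedRenderer
{
public:
    // The AI calls this only when an object actually moves.
    void SetPosition(int handle, const Matrix& m)
    {
        mMatrices[handle] = m;
        ++mUpdateCount;
    }

    // Each frame the renderer draws from its retained state; nothing needs
    // to be resubmitted by the AI. Returns the number of objects drawn.
    int DrawFrame() const { return static_cast<int>(mMatrices.size()); }

    int UpdateCount() const { return mUpdateCount; }

private:
    std::map<int, Matrix> mMatrices;  // state retained frame to frame
    int mUpdateCount = 0;
};
```

With an immediate mode system, a stationary object costs one submission per frame; here it costs one submission total, regardless of how many frames are drawn.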
For example, in a normal immediate mode rendering system, an object being drawn would be submitted for rendering every frame along with a matrix containing its position. In this system, by contrast, the matrix would only need to be sent when the object actually moves.
“Reuse” means that the system will be set up in such a way that either the whole system or portions of it can be reused in other projects. This means that code should have minimal coupling with the rest of the game engine, and between disparate systems within the renderer itself. The other goals of opacity and narrowness of interface are helpful in achieving this.
2.2 Loading
Loading will be based on the concept of “bundles”. A bundle is simply a set of related data. It could be a character, an NIS, or a room and its associated geometry.
Bundles can consist of multiple .p3d (or other types) files. Each bundle will be defined using a
text file. Furthermore there will be a master list of all bundles used in the game (the manifest),
also stored as a text file.
- A requested load: an asynchronous request to load the data is initiated and will be completed as soon as possible without interrupting gameplay. This would happen when the game knows that a new room or object will be needed soon, but the game needs to continue playing (and the renderer would not be able to determine this via its heuristics for what data to bring in next).
- An unload: the data is removed from memory (if possible; the data may be in use by the renderer, in which case the unload might not be performed immediately).
However, the system should be set up in such a way that load operations are optional rather than forced, and would be used primarily to give hints to the renderer for more efficient data use. The reason for this is that the bundles, or possibly the manifest, should know what sort of data resides in each bundle, so that when a particular piece of data is needed inside the renderer (for example, a particular room has become visible, or a character has been allocated) it can determine which bundle is needed for that data and load it automatically. This will be done through the concept of “exports”. Each bundle definition will have a list of exports, which are lists of the high level data objects that this bundle is used to construct. These exports will be searchable to allow the system to find a bundle that is capable of producing the high level object with a given name and type.
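The export search can be sketched as a scan over the manifest. The types and function below are illustrative (not from the actual implementation), but the example data mirrors the Room02 and Thug bundle definitions shown later in this document.

```cpp
#include <cassert>
#include <string>
#include <vector>

// An export is a (type, name) pair describing a high level object the
// bundle can produce.
struct Export { std::string type, name; };

struct BundleDef
{
    std::string bundleName;
    std::vector<Export> exports;
};

// Scan the manifest for a bundle that exports the requested object.
// Returns the bundle name, or "" if no bundle can produce it.
std::string FindBundleFor(const std::vector<BundleDef>& manifest,
                          const std::string& type, const std::string& name)
{
    for (const BundleDef& b : manifest)
        for (const Export& e : b.exports)
            if (e.type == type && e.name == name)
                return b.bundleName;
    return "";
}
```

When the renderer needs, say, the dynamic object template "Thug", this lookup tells it which bundle to load automatically, without an explicit load hint from the AI.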
The bundles will also store dependency information, i.e. whether any Pure3D objects in the current bundle reference objects in other bundles. The loading system will utilize the user-specified store functionality of the Pure3D loading system to enforce these dependencies: each bundle will live in a separate Pure3D store-derived object, which will be passed into the rendering system, and each bundle will know what other bundles it can search while loading.
The loading system, despite its name, is really a data management system. Actual disk I/O and data parsing will be performed by other code. In the case of disk management, the game will have a unified system for handling all disk access, and the renderer's loading system will route file requests through it. The Pure3D loading system will generally handle data parsing, since most of the renderer's data is in Pure3D formats.
2.3 Rendering
Each frame the renderer produces a view of the current state of the game world (how that state is arrived at is discussed in the next section). This is done in response to a single call in the main function call interface of the rendering engine. What the rendering engine renders, and how it renders it, is discussed here.
The world geometry system is the foundation of the renderer. It draws all static objects in the
world, as well as serving as a container in which dynamic objects can be held. The fundamental
object in the world geometry rendering system is the Room.
- The polygons that make up the base immobile geometry of the room.
- Static objects that can either move along a fixed path or be destroyed (any object that can be moved arbitrarily is a dynamic object, and is handled separately). For example, a door, which can only open or close.
- Information about modifiers that affect dynamic objects as they move through the room (light and shadow volumes, fog volumes, etc.).
- A list of effects (either triggered or constantly updating) that can affect the room (these are added to the master effects list at load time).
- Several portal structures that define an area through which another room can be seen.
- A set of convex polyhedra that define the area of the room. These can be used to determine whether a given point in space is inside a room (a necessary piece of information for the visibility calculations).
- A pointer to the room that the camera is in will be maintained by the rendering
system. This will be updated in response to camera move requests.
- All objects in the current room will be tested for visibility against the camera frustum
and rendered if visible.
- For each portal out of the room, the portal polygon is tested against the camera
frustum.
- If a portal is within the frustum a new frustum will be generated that clips the current
camera frustum against the portal polygon. Then that room is rendered using the same
technique.
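The recursive portal traversal above can be sketched as follows. To keep the clipping step obvious, the frustum is reduced to a 1-D screen interval; a real implementation clips a 3-D camera frustum against the portal polygon. All types and names here are illustrative.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// A frustum collapsed to a screen-space interval [lo, hi).
struct Frustum { float lo, hi; };

// A portal has a screen extent and leads to another room.
struct Portal { Frustum extent; int targetRoom; };

struct Room { std::vector<Portal> portals; };

// Render a room, then recurse through each visible portal with a frustum
// clipped against that portal, as described in the bullets above.
void RenderRoom(const std::vector<Room>& rooms, int roomIndex,
                Frustum frustum, std::vector<int>& drawn)
{
    drawn.push_back(roomIndex);  // draw this room's visible contents

    for (const Portal& p : rooms[roomIndex].portals)
    {
        // Clip the current frustum against the portal extent.
        Frustum clipped = { std::max(frustum.lo, p.extent.lo),
                            std::min(frustum.hi, p.extent.hi) };
        if (clipped.lo < clipped.hi)  // portal visible through frustum
            RenderRoom(rooms, p.targetRoom, clipped, drawn);
    }
}
```

Because each recursion narrows the frustum, rooms behind portals that fall outside the view are never visited at all.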
A portal can also be set up that points into the same room it comes from. This is a mirror portal,
which behaves like a normal portal, except that it inverts the co-ordinate system of the camera
before continuing rendering, resulting in a mirror effect.
Portals can also have textures associated with them that are approximations of the geometry in the room beyond them. These can be used if the room on the other side of the portal is not loaded, or if rendering complexity for a given frame is getting too high and the system can't afford to render another room.
A portal may have a door associated with it. A door is a triggered world object that, when in particular states, may block the portal, preventing rendering through it.
Dynamic objects are any objects that can move through the world under AI control. This could be a character, a prop, or a special effect.
There will be a large table of all dynamic objects that can be created in the world, indexed by name. Each entry will contain a template that can be used to instantiate an object.
When a dynamic object is created, a dynamic object instance is created based on this template
and placed in two other lists. The global dynamic object list is a large table of all dynamic
objects currently existing in the game world. This table is used by the render engine to dispatch
information coming from the AI to specific dynamic objects. A handle into this table is returned
to the AI from the dynamic object creation function.
Each room also has a list of all dynamic objects contained within it, which is used by the renderer to handle visibility determination. As objects move from room to room, they are shifted between the lists belonging to those rooms.
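The two bookkeeping structures described above can be sketched as follows. For brevity this illustrative sketch scans the global table to answer per-room queries; a real implementation would maintain the per-room lists directly. All names are hypothetical.

```cpp
#include <cassert>
#include <string>
#include <vector>

// One live instance in the global dynamic object table.
struct DynamicObject
{
    std::string templateName;  // which template it was instantiated from
    int room;                  // current containing room
};

class World
{
public:
    // Instantiate an object from a named template; the returned handle is
    // its index into the global table, handed back to the AI.
    int Create(const std::string& templateName, int room)
    {
        mObjects.push_back({ templateName, room });
        return static_cast<int>(mObjects.size()) - 1;
    }

    // Called as the object crosses from one room to another.
    void MoveToRoom(int handle, int room) { mObjects[handle].room = room; }

    // Visibility pass: how many dynamic objects are in a given room.
    int CountInRoom(int room) const
    {
        int n = 0;
        for (const DynamicObject& o : mObjects)
            if (o.room == room) ++n;
        return n;
    }

private:
    std::vector<DynamicObject> mObjects;  // the global dynamic object table
};
```

The handle doubles as the dispatch key: state calls arriving from the AI index straight into the global table, while the room membership drives visibility.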
Each dynamic object has a matrix associated with it that defines its position in the world. Dynamic objects may also have two other optional pieces of information associated with them. Dynamic objects that have a skeleton have a pose associated with them. Dynamic objects that have an AI-controlled animation associated with them also have a time parameter that can be set. Some dynamic objects may also have animation information associated with them that is updated automatically based on time. In this case the global time is essentially used as the time parameter for that object. Dynamic objects with behavior like this are also effects, and would also appear in the effects list mentioned below.
2.3.3 Effects
An effect is anything that has a behavior that changes over time. Effects are driven via the time parameter passed into the rendering system each frame. Effects by themselves do not have any sort of drawn representation; anything that is an effect will likely have a rendering representation that is either a world object or a dynamic object.
There are two lists associated with effects: the active effect list and the inactive effect list. The active effect list is the list of all effects that are currently running in the world. This list is traversed each frame to update the state of animation objects based on time. The inactive effects list contains effects that are not currently active, but need to be triggered before they begin animating. Just because an object is on the inactive list does not mean that it isn't doing anything; objects on the inactive list are simply not being updated automatically.
Whenever a room is loaded, or a dynamic object that has associated effects is created, an associated effect object is placed on one of these two lists, depending on the nature of the effect. For example, an animating water texture in the world would be updating all the time, and thus would be placed on the active list immediately on room load; a piece of glass that can shatter would be placed on the inactive list, since the shatter would only happen in response to an event.
For example, say there is a jet of steam coming from a pipe that is turned on when a certain player-initiated event occurs (say a valve is turned). The effect would start on the inactive effects list, and not be drawn. When the valve is turned, the AI would send the renderer a start event for the effect, turning on the jet. Thereafter it would proceed under renderer control. If it is possible for the player to restore the valve to its original position, the AI could then send the renderer a stop event to turn the jet off.
Set time events only apply to objects on the inactive list, and they directly set the time parameter on the effect. This means that objects on the inactive list are essentially under AI control, if the AI chooses to exercise that control. An example of direct AI control of an effect might be a wall texture that can show damage via a texture animation. As the wall becomes more damaged, the AI would manually update the effect to move through the texture animation.
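The active/inactive bookkeeping and the start/stop events described in this section can be sketched as follows. The class and method names are hypothetical and effects are reduced to integer ids.

```cpp
#include <cassert>
#include <set>

// Tracks which effects update automatically (active) versus which wait
// for an AI trigger (inactive), as described above.
class EffectLists
{
public:
    void AddActive(int id)   { mActive.insert(id); }    // e.g. water texture
    void AddInactive(int id) { mInactive.insert(id); }  // e.g. breakable glass

    // AI start event: move the effect onto the active list.
    void Start(int id)
    {
        if (mInactive.erase(id))
            mActive.insert(id);
    }

    // AI stop event: back to the inactive (manually controlled) list.
    void Stop(int id)
    {
        if (mActive.erase(id))
            mInactive.insert(id);
    }

    // Only active effects receive the global time each frame.
    bool IsActive(int id) const { return mActive.count(id) != 0; }

private:
    std::set<int> mActive, mInactive;
};
```

In the steam-jet example above, the effect begins on the inactive list, `Start` is issued when the valve is turned, and `Stop` returns it to AI control.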
2.3.4 Modifiers
Modifiers are areas or objects (generally volumes) that modify dynamic objects that move
through them. These include things like lighting, shadows and fog. These are the trickiest of all
the rendering components, because they require the greatest customization of the rendering
pipeline to implement.
For lighting, each room will store a list of lights that can affect that room. There can theoretically be an infinite number of lights in a room (in practice there will probably be a reasonable, but large, limit, say 32 or 64). For each dynamic object the lighting system will generate an influence set of lights: a set of lights that conforms to the hardware limits of the platform and approximates the contributions of all lights in the scene. This may involve selecting which lights in the scene have the greatest influence, or it may even mean generating new lights that approximate the contributions of several lights in the scene.
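The simpler of the two strategies, selecting the lights with the greatest influence, can be sketched as below. The inverse-square intensity heuristic and all names are illustrative assumptions; the document does not specify the actual influence metric.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Light { float x, y, z, intensity; };

// Build an influence set for an object at (ox, oy, oz): rank the room's
// lights by an approximate contribution at that point and keep only as
// many as the hardware supports. Returns indices into `lights`.
std::vector<int> InfluenceSet(const std::vector<Light>& lights,
                              float ox, float oy, float oz, size_t hwLimit)
{
    std::vector<int> order(lights.size());
    for (size_t i = 0; i < order.size(); ++i)
        order[i] = static_cast<int>(i);

    // Crude influence heuristic: intensity falling off with distance
    // squared (the +1 avoids division by zero at the light itself).
    auto influence = [&](int i) {
        float dx = lights[i].x - ox;
        float dy = lights[i].y - oy;
        float dz = lights[i].z - oz;
        return lights[i].intensity / (dx * dx + dy * dy + dz * dz + 1.0f);
    };

    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return influence(a) > influence(b); });
    if (order.size() > hwLimit)
        order.resize(hwLimit);
    return order;
}
```

The alternative mentioned above, synthesizing new lights that stand in for several real ones, would replace the `resize` step with a merging pass over the discarded lights.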
2.3.5 Shadowing
Shadowing will be carried out via a number of solutions. For world geometry, all lighting and
shadow information will be stored in light maps.
For characters, shadows will be handled in different ways depending on the situation. For normal gameplay, shadows will be handled by a simple sampling of the world light level to generate a global illumination value for the object. When more detailed shadowing is needed, characters will be able to have projective texture shadows applied to them. Use of these will generally be reserved for situations where dramatic lighting is artistically important, and they will need to be hand built to a certain degree.
For shadows cast from characters a stencil-buffer solution will be used when detailed shadows
are needed. The PS2 does not have direct support for stencil buffer shadows, however it may be
The rendering system should be able to play NISs for non-gameplay plot and character development. The NIS system will be very simple. It will simply use the Pure3D scene-graph and multi-controller systems to display a scene, with animation, exactly as exported from Maya by the artist responsible for its creation. A simple interface will be provided to start and cancel the playing of an NIS.
Objects drawn in an NIS will be passed through the standard rendering pipe. That is, they will be converted into temporary objects equivalent to dynamic objects, passed through the room structure so that lighting and other modifiers can be applied to them, and then on to the display for rendering. This will require a special scenegraph rendering traversal, custom to the engine, to be implemented.
The current NIS manager will be reused (with some modification) to perform this task.
Each frame, the AI runs and updates the state of the renderer prior to displaying the graphics for that frame. All these state update calls are made through the main function call interface of the renderer.
The renderer is completely state-driven; given the same set of state calls relative to display calls, the renderer should produce the exact same rendered frames. This is a useful property because it makes it possible to reproduce error conditions in the renderer easily, given the correct support structure.
The major elements of state that the renderer needs to deal with are:
- Time
- Camera information
Time is the simplest input that the renderer has to deal with. For the most part time is simply
distributed among the dynamically updating components of the system (effects).
In general the renderer will assume that time is monotonically increasing at some rate; however, when an instant replay is occurring, the rendering system needs to rewind time and replay certain events. The renderer will interpret a negative time as the start of an instant replay.
The renderer will be told at initialization time how long an instant replay it may be called upon to deal with, and will maintain appropriate internal buffers to handle backing up time. The renderer is not responsible for storing the positions and animation frames to be replayed during an instant replay; it assumes that the AI will be feeding it that information again. However, it does need to buffer certain triggering and creation/destruction information. For example, if an object existed at the time that the instant replay is returning to, but has subsequently been destroyed, the renderer should be able to resuscitate that object automatically.
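The creation/destruction buffering described above can be sketched as a time-stamped event log that answers "was this object alive at the rewound time?". The types are illustrative, not from the actual design.

```cpp
#include <cassert>
#include <vector>

// One lifetime event for a dynamic object.
struct LifeEvent
{
    float time;
    int object;
    bool created;  // true = creation, false = destruction
};

class ReplayBuffer
{
public:
    // Called by the renderer as objects are created and destroyed.
    void Record(float time, int object, bool created)
    {
        mEvents.push_back({ time, object, created });
    }

    // When time rewinds for an instant replay, decide whether the object
    // needs to be resuscitated: replay its events up to the rewound time.
    bool AliveAt(float time, int object) const
    {
        bool alive = false;
        for (const LifeEvent& e : mEvents)
            if (e.object == object && e.time <= time)
                alive = e.created;
        return alive;
    }

private:
    std::vector<LifeEvent> mEvents;  // assumed recorded in time order
};
```

A real buffer would also be bounded by the maximum replay length given at initialization, discarding events older than the window.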
The system does not deal with highlight reels or replay of long sections of time. In these cases the AI needs to store all information relating to the playback, including events outside the time of the replay. The deterministic nature of the renderer is helpful here, but there is no direct support for this sort of replay.
2.4.2 Camera
The camera defines the current position from which the view of the world is rendered. The AI
can set the properties of the camera, including the position, orientation and field-of-view.
Additionally, the AI can indicate that the movement or change on a given frame is a camera cut; that is, rather than moving between the last and current positions, the camera is cutting to a completely new location. This allows the renderer to optimize the visibility calculation, since more information can be re-used from the last frame if the camera is moving along a path, and more complex visibility calculations can be avoided unless the camera has actually moved to a totally new location. Camera cuts are also required when moving to non-contiguous locations (such as another level).
A camera cut can also include an optional transition effect, such as a screen flash.
The AI needs to create and destroy characters and items as the game progresses. The dynamic object creation function adds a new object to the world, based on a named template. Creation returns a handle that can be used to modify that object later. Objects can also be destroyed; this will remove the object from the game world completely. Destruction can include an optional destruction state that will specify a state for the object to be transitioned to before the object is removed (for a fade out or breaking effect).
Dynamic objects can move about the world under AI control. There are methods to set the position of a dynamic object (including its pose if it is a character or other jointed object). Objects can also have any number of visible states, defined in their bundle, where alternate geometry can be substituted depending on game events, to simulate damage or activation. States can include an animation to transition into the state.
As mentioned in the rendering section, effects can exist in either an active or inactive state, as
well as being able to have their state updated manually.
The rendering engine needs to know what level the game is currently playing. Although rooms are loaded asynchronously and levels could theoretically be traveled between in a seamless manner, this is undesirable for two reasons. One is that, to simplify gameplay, there will generally be a sharp transition, with an NIS or mission briefing, between levels anyway. The other problem is that each level will be stored in a different co-ordinate system, so the ordinary system for classifying objects between rooms will not work for moving between levels.
There will be a set of events (begin/end) that signify that a level change is taking place. When a level change begins, all dynamic objects on the previous level will be destroyed (this does not necessarily mean that the rendering data associated with them is unloaded; only rooms will definitely be unloaded, while textures and object templates can and should remain loaded). During the level change, objects are created and the camera can be moved to a position in the new level.
3.1 Bundles
A bundle is a package of data used by the renderer. It could consist of textures, animation data,
geometry or many other types of data.
Bundles are specified by text files. These text files have a standardized extension (.bdl) so that they can be easily found through directory iteration. The text file contains two kinds of information about the bundle: what files make up the bundle, and what kind of high level data the bundle exports (rooms, dynamic object templates, etc.).
Bundles consist of a list of keywords, each with a number of arguments. The important keywords are:
- depend : the name of another bundle that this bundle depends upon
- export : a high level rendering object to be added to the rendering state when the bundle is loaded.
Two example bundle definitions (a room and a character):
name Room02
depend TrainTex
depend InteriorTex01
file room02.p3d
export room Room02
export effect Room02-Door1
export effect Room02-Door2
export effect Room02-WindowBreak
name Thug
file thug_skeleton.p3d
file thug_skin.p3d
file thug_basetex.tga
export object Thug
The primary unit of world geometry is the room. Each room will be stored in a custom Pure3D chunk. Most other data relating to rooms will be stored in other standard Pure3D chunks.
All data in a given room is in a single bundle, except for texture data. Texture data may be shared between rooms, but all other data is unique to a given room. Objects in the room need not be given unique names, except for effects that are exported from the room.
An example chunk format for the room chunks (in p3d schema format) is as follows:

struct tlDAPortalVert
{
    float x;
    float y;
    float z;
    float u;
    float v;
}

struct tlDAPortalChunk            // header inferred from the reference in the room chunk below
{
    ULONG coordSystemChange;
    tlMatrix newCoordinateSystem;
    string DoorEffect;
    float doorThreshold;
    string texture;
    Chunk tlDAPortalVerts;
}

struct tlDARoomChunk              // placeholder name; the original header was not preserved
{
    ULONG NumPortals;
    ULONG NumHulls;
    ULONG NumModifiers;
    ULONG NumLights;
    Chunk tlDAPortalChunk;
    Chunk tlDAConvexHull;
    Chunk tlLightChunk16;
    Chunk tlDAProjTexModifier;
}
Characters and props are stored using standard Pure3D data chunks, namely a Pure3D skeleton for the base hierarchy if the object has one, and a drawable for the rendering structure (generally a composite drawable in the case of characters, or a geometry in the case of props).
3.2.3 Effects
Effects will, for the most part, be stored as standard Pure3D data types (custom types may need to be added at a later time for custom game-specific effects; these requirements will be addressed when/if they actually occur). Each effect will have as its root object a multi-controller that drives the effect.
TODO
World lighting for DA will be done using lightmaps. A lightmap is a second texture applied to
the world containing the lighting information. This section describes how the lighting
information will be computed.
There are two major steps in building lightmaps, and the tool will be implemented in two stages:
3.4.1 Stage 1
Stage 1 is the computation of the global lighting solution. The tool takes a Pure3D file representing the world, along with all the light sources in the world, and must compute the lighting at every point on the surfaces of the world. The technique we will use to do this is photon mapping.
Toollib will be extended to include a mesh object optimized for raytracing, and basic ray/poly
intersection routines. Also, toollib will be extended to include a Photon Map object as described
in [4]. For Stage 1, the tool will simply write the lighting values into the vertex colours of the
world mesh, giving an approximation of the look of the final system.
2) Load the Pure3D file describing the world, adding each polygon into the raytracing mesh.
3) Trace photons from each light source, storing them in the photon map until the global photon
limit has been reached.
5) For each vertex in the world, ask the photon map data structure for the illumination at that
point, and write it into the CBV. This can be done on the original meshes, deinstancing them
as needed, or if the original meshes aren’t needed, this can be done on the world mesh,
resulting in a single mesh output.
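The per-vertex gathering step (step 5) amounts to a photon-map density estimate: sum the power of the photons near the query point and divide by the area searched, in the spirit of [4]. The sketch below is illustrative only, it uses 2-D positions and a brute-force radius search where a real photon map would use a kd-tree.

```cpp
#include <cassert>
#include <vector>

// A stored photon: position (2-D here for brevity) and carried power.
struct Photon { float x, y, power; };

// Estimate the illumination at a vertex from the photons landing within
// `radius` of it: gathered energy divided by the search disc's area.
float IlluminationAt(const std::vector<Photon>& map,
                     float vx, float vy, float radius)
{
    float total = 0.0f;
    for (const Photon& p : map)
    {
        float dx = p.x - vx;
        float dy = p.y - vy;
        if (dx * dx + dy * dy <= radius * radius)
            total += p.power;
    }
    return total / (3.14159265f * radius * radius);  // density estimate
}
```

In Stage 1 this value would simply be written into the vertex colours of the world mesh; later stages would bake it into lightmap texels instead.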
a) Are the material specifications in Maya sufficient to capture the surface properties used for
Photon mapping? Do we need a more sophisticated material model?
b) Do we need to maintain individual meshes in the world, or can we output a polygon soup of
all the polygons from all the meshes in the world? If we can return a polygon soup, the last
step is somewhat simpler.
c) How does the tool interact with the rooms/portals system for splitting up the world? There
are a few possible designs for this interaction:
ii) The lightmap tool runs on the whole world before the room tool runs.
PRO: Simple, and with globally correct lighting
CON: Artists can’t iterate on individual rooms, and must wait for a global computation.
iii) The lightmap tool runs on a given room, but using all adjacent rooms for the lighting
computation.
PRO: Iteration on single rooms is possible
iv) It’s possible to use the precomputed photon map from adjacent rooms to compute the
lighting in another room.
CON: This is probably the most complicated solution.
v) Allowing a combination of i) and ii) in the pipeline would allow artists to iterate on the
look of one room and get accurate global lighting in the full game art build.
4.1 Render::Interface
The external interface of the rendering engine will consist of a single class interface (Render::Interface). This class interface is the only view that the rest of the game will have of the rendering engine. This helps the rendering engine meet its key design goals of opacity and narrowness of interface, as well as making the system far more decoupled.
With one important exception (the front end/overlay system) all Pure3D rendering calls will take
place inside the rendering engine.
The rendering engine object is a singleton; only one can ever exist in a game. There should be one global object, which must be accessed through a global pointer or accessor function.
To minimize the compilation coupling in the system, the rendering engine pointer exposed to the
AI will be a pure virtual base class, from which the concrete rendering engine implementation
will be derived. Because the rendering engine calls are very high level in nature, the overhead of
having those functions virtual should not be significant.
This class represents the interface to the rendering engine only. See section 5 for a description of
the implementation of the concrete rendering engine class (Engine).
class Name;
class Handle;

namespace rmt
{
    class Vector;
    class Matrix;
}

class tPose;

namespace Render
{

class Interface
{
public:
    struct InitData
    {
        unsigned instantReplayTime;
    };

    class Callback
    {
    public:
        virtual void Do(void) = 0;
    };
4.3 Types
4.3.1 Name
A name is a symbolic identifier that uniquely identifies an object. All objects have names associated with them and can be looked up based on those names. Names will be implemented using a hashing system, with full text strings being stored only in data files (and in the debug runtime).
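The hashed-name idea can be sketched as below. The document does not specify the hash function; FNV-1a is shown as one common choice, and the class shape is an illustrative assumption.

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// A Name stores only a 32-bit hash of its string, so comparison and table
// lookup are integer operations. The full text lives only in data files
// (and, per the design, in the debug runtime).
class Name
{
public:
    explicit Name(const std::string& text)
    {
        std::uint32_t h = 2166136261u;  // FNV-1a offset basis
        for (unsigned char c : text)
        {
            h ^= c;
            h *= 16777619u;             // FNV-1a prime
        }
        mHash = h;
    }

    bool operator==(const Name& other) const { return mHash == other.mHash; }
    std::uint32_t Hash() const { return mHash; }

private:
    std::uint32_t mHash;
};
```

A debug build would additionally keep the source string alongside the hash, both for readable logging and to detect the (rare) hash collision during development.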
4.3.2 Handle
4.3.3 InitData
4.3.4 Callback
A base class for callback events. Any time the rendering engine needs to notify the rest of the
game of anything, a game object derived from Callback can be passed in to receive the
notification.
4.4.1 Setup
Initializes the rendering engine, including allocating memory and setting up internal data structures. Since other components of the system need to access Pure3D (front end, animation and physics), the rendering engine is NOT responsible for initializing Pure3D. It assumes that a platform and context have already been created.
4.4.2 Shutdown
Cleans up the rendering engine, freeing all data associated with rendering.
4.4.3 Display
4.4.4 SetHUDCallback
Since the rendering engine controls beginning and ending a frame, the overlay system (which handles its own rendering) needs a way to insert itself into the rendering of a frame.
The callback object will be triggered to give the overlay system a chance to render after all other rendering has completed, but before calling tContext::EndFrame.
4.4.5 LoadForce
Load a bundle of data into the rendering engine, blocking until the data has been successfully
loaded.
4.4.6 LoadRequest
Load a bundle of data into the rendering engine. The request will be processed asynchronously.
4.4.7 Unload
Removes a bundle of data from the currently loaded data set. Call may not actually unload data if
it is still being used.
4.4.8 SetTime
Sets the current time of the game world, for the purpose of updating animations and effects.
4.4.9 LevelChange
Perform a level change. The data associated with the current level is discarded, and the new level
becomes the one being displayed.
4.4.10 BeginNIS
Starts playing a named NIS. The NIS will play until canceled or finished. If the hold parameter
is set, the NIS will hold on the last frame until cancel is called manually by the AI.
4.4.11 CancelNIS
4.4.12 CameraCut
Perform a camera cut. The camera is immediately moved to the new location, and undergoes a
full room classification to determine the current visibility state. An optional named camera
transition effect can be specified.
A camera cut will generally be accompanied by one or more other camera state calls. If a camera
cut is taking place, CameraCut should be called first, then other parameters set.
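The required ordering can be illustrated with a recording stub (not the real Interface; the call log is purely for demonstration):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Stand-in that logs the order of camera calls so the cut-first
// convention described above can be checked.
struct CameraStub
{
    std::vector<std::string> calls;
    void CameraCut(const char* transition)
    {
        calls.push_back(std::string("cut ") + transition);
    }
    void CameraSetPosition() { calls.push_back("position"); }
    void CameraSetTarget()   { calls.push_back("target"); }
};
```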
4.4.13 CameraSetPosition
Moves the camera to a new location in the game world. If the camera moves through a portal as
a result of this call, the visibility state of the world is updated appropriately.
4.4.14 CameraSetTarget
Set the point-of-interest for the camera to a new location in the game world.
4.4.15 CameraSetFrustrum
4.4.16 ObjectCreate
Returns a handle that can be used by the other object functions to manipulate the dynamic object.
4.4.17 ObjectDestroy
Destroy the specified dynamic object. If the object supports state transitions, an optional state
can be specified which will be transitioned into before the object is removed from the game
world.
4.4.18 ObjectShow
Used to set the visibility state of an object. Calling this function with false will cause the object
to not be drawn, without removing it from the game world. Useful for operations such as
turning off characters when an NIS is running.
4.4.19 ObjectSetState
4.4.20 ObjectSetLocation
Move the object to the specified location in the game world. If the object is moved through a
portal, the world rendering information will be updated to reflect the change.
4.4.21 ObjectSetPose
If the object is of a type that has an associated pose (i.e. a character) update its pose.
4.4.22 EffectGet
4.4.23 EffectRelease
4.4.24 EffectStart
Start playing an effect. There are two forms, one of which takes a handle returned from
EffectGet, the other a name. The name form should be used
4.4.25 EffectStop
Stop playing an effect. As with EffectStart, this can be done either by name or by handle.
4.4.26 EffectSetTime
Manually set the time for an effect. Used for effects that do not play in time but are directly
under AI control.
5.1 Overview
The organization of the rendering engine follows a pyramidal structure. At the top is
Render::Engine, the implementation of the Render::Interface class mentioned earlier. It performs
very little actual work, but is mainly responsible for dispatching requests to other objects in the
system. (Hereafter, unless noted, all classes referred to exist in the "Render" namespace.
Qualifiers and namespace blocks are omitted for simplicity.)
Below it are the major subsystems, each of which is responsible for one large area of
functionality. They are:
- World : Manages world geometry (rooms), and co-ordinates overall rendering flow.
Each of these classes also has a number of helper classes that it uses to do the actual work; these
will be detailed in the section associated with each major subsystem.
The last classes are the base services of the renderer. These are used by all the manager
classes to handle common tasks:
- Display : Abstracts the display hardware, handle once-per-frame tasks, and buffering
rendering data for display.
- Debug : Handles internal debugging and profiling tasks. Does not exist in release
mode.
- The AI sets up the current state of the world by calling functions in Interface.
- The Display class sets up the initial state of the renderer, including the camera and
other global state.
- The World is called to begin rendering. Each room is rendered, beginning with the
room the camera is currently in and proceeding through portals according to the
portal algorithm.
- A number of Modifiers are associated with each drawable, depending on the location
in the world where it is rendered (lights, fog, etc.).
- The Drawables are passed to the Display, which either buffers them (if deferred
drawing is needed, for example if the object is translucent) or draws them immediately.
It uses the modifiers associated with the object to update the global rendering state
prior to drawing.
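The buffering decision in the final step can be sketched with simplified stand-in types (these are not the real Drawable/Display declarations from section 5.8):

```cpp
#include <cassert>
#include <vector>

// Minimal stand-ins illustrating the defer-vs-draw-now decision.
struct Drawable
{
    bool translucent;
};

struct Display
{
    std::vector<Drawable*> buffered;   // deferred for sorting at end of scene
    int drawnImmediately = 0;

    void Submit(Drawable* d)
    {
        if (d->translucent)
            buffered.push_back(d);     // defer: drawn after opaque geometry
        else
            ++drawnImmediately;        // opaque: draw right away
    }
};
```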
5.3.1 Name
Class Definition
class Name
{
public:
Name(char*);
Name(const Name&);
Name(P3D_U64 uid);
public:
P3D_U64 uid;
#ifndef NDEBUG
char* string;
#endif
};
Public Interface
Name(char*);
Name(const Name&);
Copy constructor
Name(P3D_U64 uid);
P3D_U64 uid;
A hashed unique identifier, to allow names to be easily searched and compared without
string operations.
char* string;
The original text string, present only in debug builds.
5.3.2 List
List is used throughout the class definitions as shorthand for an arbitrary-length list (i.e.
one that can have elements added and removed on the fly). The exact implementation
should be shared with the rest of the project, possibly the Vector class
(code\util\vector.hpp)
Declaration
class Engine
{
public:
void Setup(const InitData&);
void Shutdown(void);
void Display(void);
void SetOverlayCallback(Callback*);
void SetTime(Time&);
void LevelChange(Name&);
void CameraCut(Name&);
void CameraSetPosition(const RadicalMathLibrary::Vector&);
void CameraSetTarget(const RadicalMathLibrary::Vector&);
void CameraSetFrustrum(float fov);
Handle& ObjectCreate(Name&);
void ObjectShow(bool show);
void ObjectDestroy(Handle&, const Name& destructionState);
void ObjectSetState(Handle&, const Name&);
void ObjectSetLocation(Handle&, RadicalMathLibrary::Matrix&);
void ObjectSetPose(Handle&,tPose*);
Handle& EffectGet(Name&);
Handle& EffectGet(Handle& object, Name&);
void EffectRelease(Handle&);
void EffectStart(Handle&);
void EffectStop(Handle&);
void EffectStart(Name&);
void EffectStop(Name&);
void EffectSetTime(Handle&,Time&);
private:
Callback* overlayCallback;
World* world;
ObjectManager* objectManager;
EffectManager* effectManager;
NISPlayer * nis;
Loader* loader;
Display* display;
#ifndef NDEBUG
DebugOverlay* debugOverlay;
#endif
};
Public Interface
Identical to Render::Interface.
Private data
Callback* overlayCallback;
World* world;
ObjectManager* objectManager;
EffectManager* effectManager;
NISPlayer* nis;
Loader* loader;
Display* display;
Remarks
5.4.1 World
Declaration
class World
{
public:
void Display(void);
void Display(Name&);
void AddObject(Object*);
void MoveObject(Object*, rmt::Vector& position);
bool IsRoomLoaded(Name&);
void LoadRoom(Name&);
void ReleaseRoom(Name&);
protected:
Room* currentRoom;
List<Room*> rooms;
};
Public Interface
void Display(void);
Display the world. It calls into the Display to generate a starting view volume. Then
starting from the current room it renders geometry and objects and recurses through
portals into adjacent rooms.
void Display(Name&);
Tell the WorldManager that the camera has moved, allowing it to update the current
room pointer.
void AddObject(Object*);
Add an object to the world. This will classify it into the appropriate room.
void RemoveObject(Object*);
void MoveObject(Object*, rmt::Vector& position);
Notify the world manager that an object has moved to the given position; any change in
room location will be made.
bool IsRoomLoaded(Name&);
void LoadRoom(Name&);
Request the load of a room. This will tell the loading system to load the room into
memory, add it to the room list, and register any effects associated with the room with the effects
system.
void ReleaseRoom(Name&);
Request the unloading of a room. The room and any effects associated with it are
removed from the appropriate lists and the memory associated with the room is freed.
Private Data
Room* currentRoom;
List<Room*> rooms;
5.4.2 Room
Class Declaration
class Room
{
public:
Name& GetName();
bool IsPointInside(rmt::Vector&);
Modifier* CalcLighting(Object*);
private:
Name name;
tCompositeDrawable* baseGeometry;
int nObjects;
int nUsedObjects;
Object** objects;
int nPortals;
Portal* portals;
int nHulls;
ConvexHull* hulls;
int nModifiers;
Modifier* modifier;
int nLights;
Light* lights;
};
Public Interface
Name& GetName();
Draw the room, using the specified hull as the viewing frustum.
bool IsPointInside(rmt::Vector&);
Modifier* CalcLighting(Object*);
Calculate a modifier that represents the lighting state of an object in the room.
Private Members
Name name;
tCompositeDrawable* baseGeometry;
int nObjects;
int nUsedObjects;
Object** objects;
int nPortals;
Portal* portals;
int nHulls;
ConvexHull* hulls;
int nModifiers;
Modifier* modifier;
int nLights;
Light* lights;
5.4.3 Plane
Declaration
class Plane
{
public:
Plane(const rmt::Vector& normal, float D);
Plane(const rmt::Vector& p1, const rmt::Vector& p2, const rmt::Vector& p3);
Plane(const Plane& plane);
rmt::Vector normal;
float D;
};
Public Interface
Plane(const Plane& plane);
Copy constructor.
rmt::Vector normal;
float D;
Remarks
This class may be allocated many times during a frame, due to its use in the view volume code.
It will need overloaded new and delete operators that allow it to be allocated out of a recycled
memory pool.
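A minimal sketch of such a pooled allocator, assuming a simple free-list scheme (the real allocator may differ; the Plane body here is reduced to one field for brevity):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of class-level operator new/delete backed by a free list:
// freed blocks are recycled before any new memory is requested.
class Plane
{
public:
    static void* operator new(size_t size)
    {
        if (!freeList.empty())
        {
            void* p = freeList.back();   // reuse a recycled block
            freeList.pop_back();
            return p;
        }
        return ::operator new(size);     // pool empty: fall back
    }
    static void operator delete(void* p)
    {
        freeList.push_back(p);           // recycle instead of freeing
    }

    float D = 0.0f;

private:
    static std::vector<void*> freeList;
};
std::vector<void*> Plane::freeList;
```

Because the free list is consulted first, a delete followed by a new hands back the same block, avoiding heap traffic and fragmentation during a frame.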
5.4.4 ConvexHull
Declaration
class ConvexHull
{
public:
ConvexHull(rmt::Matrix& camera, float near, float far, float fov, float aspect);
ConvexHull(const ConvexHull&);
ConvexHull(int nPlanes, Plane* planes);
private:
unsigned nPlanes;
Plane* planes;
};
Public Interface
ConvexHull(const ConvexHull&);
Copy constructor
Private Data
unsigned nPlanes;
Plane* planes;
5.4.5 Portal
Declaration
class Portal
{
public:
unsigned GetVertexCount(void);
const rmt::Vector* GetVertices(void);
bool HasCoordinateChange(void);
const rmt::Matrix& GetCoordinateChange(void);
bool HasPortalTexture(void);
bool Display();
bool IsDoorOpen(void);
private:
int nVerts;
rmt::Vector* verts;
Name& otherRoom;
bool coordinateSystemChange;
rmt::Matrix newCoordinateSystem;
float doorThreshold;
Effect* doorEffect;
tTexture* texture;
pddiVector2* uvs;
};
Public Interface
unsigned GetVertexCount(void);
bool HasCoordinateChange(void);
Retrieve information about any coordinate system change in the portal. A coordinate
system change occurs when the room on the other side of the portal is not in the same world
space co-ordinate system as the current room. An example would be a portal that stitches two
separately modeled sections of a level together.
bool Display();
Draw the portal itself (i.e. the portal polygon with the simplification texture applied).
bool IsDoorOpen(void);
Private Data
int nVerts;
rmt::Vector* verts;
Name& otherRoom;
bool coordinateSystemChange;
rmt::Matrix newCoordinateSystem;
Effect* doorEffect;
A reference to the effect that controls the door that covers this portal.
float doorThreshold;
A reference value used to determine if the door effect has gotten far enough for the portal
to be revealed.
tTexture* texture;
The portal simplification texture. A texture that has a reasonable representation of the
room beyond painted on it.
pddiVector2* uvs;
5.4.6 Modifier
Represents an area that modifies the rendering of other objects in the scene. Used by the renderer
to alter state associated with a drawable. An abstract base class.
Class Declaration
class Modifier
{
public:
enum Type
{
SHADER,
GLOBAL_STATE
};
virtual Type GetType(void) = 0;
};
Public Interface
enum Type
Describes what the renderer should do with the modifier (i.e. what subtype it should cast
it into and what functions on that subclass it needs to call).
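The intended dispatch pattern might look like the following sketch; the SHADER branch is stubbed out, and the classes are reduced versions of the declarations in this section:

```cpp
#include <cassert>

// Reduced Modifier hierarchy showing GetType-based dispatch, which
// the document supplies so dynamic_cast can be avoided.
class Modifier
{
public:
    enum Type { SHADER, GLOBAL_STATE };
    virtual ~Modifier() {}
    virtual Type GetType(void) = 0;
};

class GlobalStateModifier : public Modifier
{
public:
    virtual Type GetType(void) { return GLOBAL_STATE; }
    virtual void PreRender(void) { preRendered = true; }
    bool preRendered = false;
};

// The renderer switches on the type tag, casts, and calls the
// subtype's hooks.
inline bool ApplyModifier(Modifier* m)
{
    switch (m->GetType())
    {
    case Modifier::GLOBAL_STATE:
        static_cast<GlobalStateModifier*>(m)->PreRender();
        return true;
    case Modifier::SHADER:
        // would cast to ShaderModifier and call Set()
        return true;
    }
    return false;
}
```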
5.4.7 GlobalStateModifier
Class Declaration
class GlobalStateModifier : public Modifier
{
public:
virtual Type GetType(void) { return GLOBAL_STATE; }
virtual void PreRender(void) = 0;
virtual void PostRender(void) = 0;
};
Public Interface
5.4.8 LightModifier
Class Declaration
class LightModifier : public GlobalStateModifier
{
public:
void PreRender(void);
void PostRender(void);
protected:
int nLights;
tLight** lights;
};
Public Interface
void PreRender(void);
Private Members
int nLights;
tLight** lights;
5.4.9 ShaderModifier
Class Declaration
class ShaderModifier : public Modifier
{
public:
virtual void Set(tShader*) = 0;
virtual void Unset(tShader*) = 0;
};
Public Interface
virtual void Unset(tShader*) = 0;
Undo the shader modification. (Note: Pure3D does not currently allow data to be read
back from shaders. This is an issue that we may need to keep an eye on.)
5.4.10 ProjectedShadowModifier
Class Declaration
class ProjectedShadowModifier : public ShaderModifier
{
public:
void Set(tShader*);
void Unset(tShader*);
protected:
tTexture* tex;
rmt::Matrix projection;
};
Public Interface
void Set(tShader*);
void Unset(tShader*);
Private Members
tTexture* tex;
rmt::Matrix projection;
5.5.1 ObjectManager
Class Declaration
class ObjectManager
{
public:
unsigned CreateObject(Name&);
Object* GetObject(unsigned);
void DestroyObject(unsigned);
private:
List<Object*> objects;
ObjectFactory* factory;
};
Public Interface
unsigned CreateObject(Name&);
Create a new object from a template with the specified name, returning a handle.
Object* GetObject(unsigned);
Retrieve the object associated with the given handle.
void DestroyObject(unsigned);
Destroy an object.
Private Members
List<Object*> objects;
ObjectFactory* factory;
5.5.2 ObjectFactory
class ObjectFactory
{
public:
void AddObjectTemplate(Name&, ObjectTemplate*);
void DeleteObjectTemplate(Name&);
Object* CreateObject(Name&);
protected:
List<ObjectTemplate*> templates;
};
Public Interface
void AddObjectTemplate(Name&, ObjectTemplate*);
Add an object template. Called by the loading system when a bundle containing a
dynamic object is loaded.
void DeleteObjectTemplate(Name&);
Remove an object template. Called by the loading system when a bundle containing a
template is unloaded.
Object* CreateObject(Name&);
Create a new object instance from the template with the given name.
Private Members
List<ObjectTemplate*> templates;
5.5.3 ObjectTemplate
Class Declaration
class ObjectTemplate
{
public:
Name& GetName(void);
Object* Create(void);
protected:
Object* base;
DuplicationInformation dup;
};
Public Interface
Name& GetName(void);
Object* Create(void);
Private Members
Object* base;
An object that serves as the basis for creating new ones. For simple objects a new
instance will be just a bitwise copy of this object. For other objects it may need to partially
duplicate some of the hierarchy of objects rooted in this object, or associate special Modifiers
with the object to swap certain objects right before rendering occurs.
DuplicationInformation dup;
5.5.4 Object
Class Declaration
class Object
{
public:
tDrawable* GetDrawable(void);
void Display(void);
void TriggerState(Name&);
bool IsShowing(void);
void Show(bool);
protected:
tDrawable* drawable;
tPose* pose;
StateInformation state;
ObjDuplicationInformation dup;
};
Public Interface
tDrawable* GetDrawable(void);
void Display(void);
void TriggerState(Name&);
bool IsShowing(void);
void Show(bool);
Private Members
tDrawable* drawable;
tPose* pose;
StateInformation state;
ObjDuplicationInformation dup;
5.6.1 EffectManager
Class Declaration
class EffectManager
{
public:
void AddEffect(Effect*, bool active);
Effect* GetEffect(Name& master, Name& name);
void RemoveEffect(Effect*);
void RemoveGroup(Name& master);
void Tick(Time&);
void Activate(Effect*);
void Deactivate(Effect*);
protected:
List<Effect*> activeEffects;
List<Effect*> inactiveEffects;
};
Public Interface
void AddEffect(Effect*, bool active);
Add an effect to the system, placing it on the appropriate list depending on the value of
active.
Effect* GetEffect(Name& master, Name& name);
Retrieve an effect by name, and optionally by the name of its master (master can be empty).
void RemoveEffect(Effect*);
Remove an effect from the system.
void RemoveGroup(Name& master);
Remove all effects with the same master from the system.
void Tick(Time&);
void Activate(Effect*);
Activate an effect.
void Deactivate(Effect*);
Deactivate an effect.
Private Members
List<Effect*> activeEffects;
List<Effect*> inactiveEffects;
5.6.2 Effect
Class Declaration
class Effect
{
public:
Name& GetName(void);
Name& GetMaster(void);
protected:
Name name;
Name master;
};
Public Interface
Name& GetName(void);
Name& GetMaster(void);
Returns the name of the effect's master object: either a dynamic object or a room.
Private Members
Name name;
Name master;
5.6.3 AnimEffect
Class Declaration
class AnimEffect
{
public:
virtual void SetTime(void);
protected:
tFrameController* controller;
};
Public Interface
virtual void SetTime(void);
Sets the time parameter for the effect, by calling advance or set-time on the controller.
Private Members
tFrameController* controller;
5.7.1 NISPlayer
Class Declaration
class NISPlayer
{
public:
void BeginNIS(Name&);
void CancelNIS(void);
void Display(void);
bool IsPlaying(void);
protected:
void DisplayScenegraph(Scenegraph::Scenegraph*);
bool playing;
tMultiController* controller;
Scenegraph::Scenegraph* scenegraph;
};
Public Interface
void BeginNIS(Name&);
Find the NIS with the given name and begin playing it.
void CancelNIS(void);
void Display(void);
bool IsPlaying(void);
Private Members
void DisplayScenegraph(Scenegraph::Scenegraph*);
Draw a scenegraph, by passing each node through the Render::Display rendering pipe
rather than the standard Pure3D drawing pipe.
bool playing;
tMultiController* controller;
Scenegraph::Scenegraph* scenegraph;
The scenegraph representing the state of the currently playing NIS scene.
5.7.3 Loader
This class manages loading. It parses the manifest and generates a list of all bundles available for
loading; it can then be directed to load or unload a bundle from memory.
Class Definition
class Loader
{
public:
enum Type
{
ROOM,
OBJECT,
EFFECT,
NIS
};
bool IsBundleLoaded(Name&);
bool DoesBundleExist(Name&);
void LoadBundle(Name&);
Bundle* GetBundle(Name&);
void UnloadBundle(Bundle*);
protected:
struct BundleStub
{
Name name;
char* realName;
Bundle* bundle;
char* fileList;
int nDependancies;
Name* dependancyList;
struct Export
{
Name name;
Type type;
};
int nExports;
List<Export> exports;
};
List<BundleStub> masterList;
};
Public Interface
Find an object exported from a bundle by name. Returns the name of the bundle in
which the object is exported.
Parse the specified bundle definition file and add all bundles defined by it to the master
list.
bool IsBundleLoaded(Name&);
bool DoesBundleExist(Name&);
Check if the named bundle exists (i.e. has an entry in the master list of bundles).
void LoadBundle(Name&);
Bundle* GetBundle(Name&);
void UnloadBundle(Bundle*);
struct BundleStub
A single available bundle, holding a reference to the bundle object (if it is loaded) and
the information on how to load it (filename and exports).
struct BundleStub::Export
List<BundleStub> masterList;
5.7.4 Bundle
A unit of related loaded rendering data. It manages all the Pure3D data associated with the bundle.
Class Definition
class Bundle : public tEntityStore
{
public:
tEntity* Find(tSafeEntityCasterBase& c, const tUID uid);
void Store(tEntity* obj);
protected:
friend class Loader;
void SetSearchDependancies(bool);
tEntityTable* data;
bool searchDependancies;
int nDependancies;
Bundle** dependancies;
};
Public Interface
Private Members
void SetSearchDependancies(bool);
Set whether or not search requests should search dependent bundles. Should only be
turned on during loading of the bundle.
bool searchDependancies;
tEntityTable* data;
int nDependancies;
Bundle** dependancies;
5.8.1 Display
Class Definition
class Display
{
public:
void SetCamera(rmt::Matrix&, float fov);
void PushCameraChange(rmt::Matrix&);
void PopCameraChange(void);
ConvexHull& GetCameraFrustum(void);
void BeginFrame(void);
void BeginScene(void);
void EndScene(void);
void BeginOverlay(void);
void EndOverlay(void);
void EndFrame(void);
void Display(Drawable*);
protected:
tView* view;
tCamera* camera;
tMatrixStack cameraChanges;
};
Public Interface
void PushCameraChange(rmt::Matrix&);
void PopCameraChange(void);
Change the position of the camera (in response to a portal with a camera change
associated with it).
ConvexHull& GetCameraFrustum(void);
void BeginFrame(void);
void EndScene(void);
End drawing the 3D portion of the scene; this will also render any buffered drawables.
void BeginOverlay(void);
void EndOverlay(void);
void EndFrame(void);
void Display(Drawable*);
Private Members
tView* view;
tCamera* camera;
tMatrixStack cameraChanges;
List of buffered drawables for sorting and rendering at the end of a scene.
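A plausible reading of this design, and the usual approach, is to sort the buffered translucent drawables back-to-front by camera distance before flushing them at the end of the scene. A sketch with a stand-in Drawable (the distance field stands in for a depth derived from the transform):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Stand-in drawable; camDistance represents distance from the camera.
struct Drawable
{
    float camDistance;
};

// Sort farthest-first so translucent objects blend correctly when
// drawn in order.
inline void SortBackToFront(std::vector<Drawable>& buffered)
{
    std::sort(buffered.begin(), buffered.end(),
              [](const Drawable& a, const Drawable& b)
              { return a.camDistance > b.camDistance; });
}
```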
5.8.2 Drawable
A single drawable object that the visibility system has determined needs to be rendered. A lot
of Drawables will be allocated and deallocated during a frame.
Class Declaration
class Drawable
{
public:
bool SetTranslucency(bool);
void AddModifier(Modifier*);
protected:
tDrawable* drawable;
rmt::Matrix transform;
bool translucency;
unsigned nModifiers;
Modifier* modifiers[MAX_MODIFIERS];
};
Public Interface
enum Type
The type of the drawable object. Supplied so that dynamic_cast can be avoided. Some
effects may not render properly if the type is not specified correctly.
Overloaded new and delete. Drawables will be allocated and deallocated frequently during a
frame, so overloaded new and delete will be necessary for performance and to combat
fragmentation.
Construct a drawable from a Pure3D tDrawable, a positioning matrix, and a type.
bool SetTranslucency(bool);
Set the translucency of the object, controls whether or not the object is buffered.
void AddModifier(Modifier*);
tDrawable* drawable;
rmt::Matrix transform;
bool translucency;
unsigned nModifiers;
Modifier* modifiers[MAX_MODIFIERS];
5.9.1 DebugOverlay
Class Declaration
class DebugOverlay
{
public:
void Display(void);
void Tick(Time&);
private:
struct DebugMessage
{
DebugMessage() : time(0) {};
int time;
char message[STRING_SIZE];
} debugMessages[MAX_MESSAGES];
unsigned nextMessage;
char statusLines[MAX_STATUS][STRING_SIZE];
};
Public Interface
void Display(void);
void Tick(Time&);
Print a temporary message to the screen (it will remain onscreen until “time” has
elapsed).
Private Members
debugMessages[MAX_MESSAGES];
unsigned nextMessage;
char statusLines[MAX_STATUS][STRING_SIZE];
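A sketch of how the message buffer and Tick might interact; the AddMessage helper and the MAX_MESSAGES/STRING_SIZE values here are illustrative assumptions, not part of the declared interface:

```cpp
#include <cassert>
#include <cstring>

// Illustrative sizes; the real constants are defined elsewhere.
const int MAX_MESSAGES = 8;
const int STRING_SIZE = 64;

struct DebugMessage
{
    DebugMessage() : time(0) { message[0] = '\0'; }
    int time;                        // remaining display time
    char message[STRING_SIZE];
};

struct DebugOverlay
{
    DebugMessage messages[MAX_MESSAGES];
    unsigned nextMessage = 0;

    // Hypothetical helper: write into the ring of message slots.
    void AddMessage(const char* text, int time)
    {
        DebugMessage& m = messages[nextMessage % MAX_MESSAGES];
        std::strncpy(m.message, text, STRING_SIZE - 1);
        m.message[STRING_SIZE - 1] = '\0';
        m.time = time;
        ++nextMessage;
    }

    // Tick counts down each live message; Display would skip expired ones.
    void Tick(int elapsed)
    {
        for (int i = 0; i < MAX_MESSAGES; ++i)
            if (messages[i].time > 0)
                messages[i].time -= elapsed;
    }
};
```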
6. PLATFORM CONSIDERATIONS
The platforms that the project is targeted at are the XBox and Playstation2. There is a possibility
that the project may also require a GameCube version at some point.
There should also be an internally maintained Win32 version. The Win32 platform has a number
of advantages as a development environment, including superior debugging and profiling tools,
ease of use, and ubiquity.
A PC version of the rendering engine, packaged separately from the main game, could serve as a
useful artist viewing tool.
The code for the PS2 and Xbox will be developed simultaneously.
7. DEVELOPMENT SCHEDULE
7.1 Risks
The key technical risk is the implementation of the lighting system. Radical does not currently
have any games which require particularly advanced lighting solutions, so no holistic
system currently exists for handling lighting and shadowing of dynamic and fixed objects in
a consistent manner. Some research needs to go into testing and evaluating the lighting options
before a final solution is chosen.
The PS2 in particular makes this a risky proposition, since it lacks some rendering features that
will make solutions to the lighting problems easier on the XBox.
7.2 Dependencies
As this is the rendering system, the major dependencies in the project are on Pure3D. These include:
The first stage of development is to get the renderer test framework up and running. This will be
a small simple application that compiles on all supported platforms and provides basic input
support, and hooks for bolting in renderer test cases. This phase also includes defining the class
headers for the external interface of the renderer and implementing initialization, a simple
loading system, and a simple rendering system (not the ones that will be used in the final system,
but again the goal is to get something running as fast as possible and then never break it).
Once this basic system is in place, development on the rendering engine can begin in earnest.
The major systems that need to be dealt with, in approximate order of importance/dependence,
are:
- Loading
- Portal rendering
- Effects
- NIS
- Facial animation
Developing useful test cases for runtime systems is quite difficult, due mainly to the difficulty of
examining the output of the system in a controlled way. There are a number of measures that can
be implemented to verify the correct operation of the renderer.
- A test level : The simplest test available. A small level dedicated to exercising the
features of the renderer, where all code paths available in the renderer will be tested
on a very short walk-through of the system.
- A separate executable for the renderer: As mentioned in the development plan, the
renderer will be initially developed separate from the game code. If this test
application is maintained, it can be used to test the functionality of the renderer
independent of the game. This removes many variables from testing by not having
code from elsewhere in the game that may itself contain bugs from interfering with
the testing of the renderer.
- A journaling system: Because of the way the interface to the renderer is designed, it
would be a simple matter to intercept all calls into the rendering system and record them.
This journal of a rendering session could then be played back at a later date and
examined in more detail.
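The journaling idea can be sketched as a thin wrapper that records each call before forwarding it; the wrapper class and its log format below are hypothetical, and only two illustrative calls are shown:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical journal: an append-only log of rendering calls that
// could be written to disk and replayed later.
struct Journal
{
    std::vector<std::string> entries;
    void Record(const std::string& call) { entries.push_back(call); }
};

// Wrapper that would sit between the game and the real
// Render::Interface, recording every call as it forwards it.
struct JournalingRenderer
{
    Journal journal;

    void SetTime(int t)
    {
        journal.Record("SetTime " + std::to_string(t));
        // ...forward to the real Render::Interface...
    }
    void CameraCut(const std::string& name)
    {
        journal.Record("CameraCut " + name);
        // ...forward to the real Render::Interface...
    }
};
```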
The first two testing aids are fairly simple to implement, and the second is already worked into the
development plan. The third (journaling) is a little more tricky, and will at the very
least not be pursued in the initial implementation.
The high level estimate for a completed rendering engine with all of the features and
functionality outlined in this document is approximately 80 person days. The work will take on a
number of distinct phases and support for Xbox and PS2 will be performed simultaneously.