RuTh's  RuThLEss  HomEpAgE



3D Object Projection

Analytic Geometry?! Don't Panic

The first thing you want to be able to do in a 3D engine is to define and display 3D entities such as characters, buildings, weapons and treasures that populate your game. (The second thing is to transform and animate them, which will be discussed subsequently.) Well, if maths teachers had thought of telling us that analytic geometry is what computer games are made of, we surely would have listened more closely. Anyway. It's about time to make good use of it. :-)

Cartesian coordinate system, vector and vertex. I'm not going to go into detail about how vectors, vertices and the coordinate system work. For now, think of vertices as the corner points of your entities in 3D space. Each corner or vertex is defined by a vector: three floating point numbers (x|y|z) that describe a fixed position in the 3D coordinate system. You can connect three vertices to form a triangle, four to form a square, or any number of them to form any polygon. You can then put together several of those polygons to form a 3D entity: The simplest example is to construct a cube out of 6 square polygons; other cases such as a house or a person are more complex but possible. Graphic: Six square polygons form one cube.

Polygons must be convex, never concave. One more thing you need to keep in mind is that with the approach I am taking for this 3D engine, there are small limitations: Polygons must always be a) flat ("coplanar"), b) convex, and c) defined with their vertices in counter-clockwise order. This is a simplification which ensures that we don't get too many case differentiations, so the calculations will not get too time-consuming. It does not limit the kinds of shapes you can create, since you can always substitute two or more convex shapes for any concave shape!


Projection means displaying 3D images on a 2D screen -- that's what your eyes do all day long on your retina, and it's essentially what we need 3D engines for. Projection requires three preparatory steps: First you have to define the 3D entities, of course; next you might want to apply some transformations to your entities as they walk or move; then you place them into your 3D game world. Finally, you project the coordinates to the 2D screen. Graphic: First, define the entity with local coordinates; second, transform the entity; third, place the entity into the 3D world; fourth, project the 3D world to the 2D screen.

In more detail:

  1. Define Local Coordinates: The 3D image data of each entity is defined as a list of polygon vertices in local coordinates; local coordinates pretend that the entity described is alone in the center of its own little 3D world. Each entity first lives in its own local 3D world before it is placed into the real game world together with all the other objects.
  2. Transformed Coordinates: The next step is the transformation of the entity. You do a transformation whenever you want to move an entity to a particular position in the game world ('translation') or you want it to face a certain way ('rotation'). Assume for now that we don't need any transformation yet; for starters, we want to display the object as it is and where it is, so we don't care how transformation works until later.
  3. World Coordinates: Next we want to convert the (possibly transformed) local coordinates to world coordinates. This means nothing else than taking the entities out of their own lonely worlds and putting them all together into the big wide game world, each in the place where it belongs. Concerning calculations, this is the easiest step.
  4. Projection: The last step is the projection itself. In this step, the three coordinates (x|y|z) of each 3D vertex will be converted to a 2D vertex with only an (x|y) coordinate, which can be drawn to the screen. As you may have noticed, things appear smaller in a distance than close by; also, parallel lines seem to come together in a distance (cf. railroad tracks). There is an easy formula to simulate the effect of this dependency of the x- and y- upon the z-coordinate (depth) -- it's the projection formula.

The Projection Formula

sx = wx * c / wz
sy = wy * c / wz
The Projection Formula projects a 3D vertex's world coordinates ( wx | wy | wz ) to its 2D screen coordinates ( sx | sy ). The formula zooms the outcome by multiplying everything with the zoom constant c (choose 200 < c < 400).

Implementation in Objective C (Apple Cocoa)

Now on to the implementation. Basically, you will need (at least) three C objects to describe 3D entities: A vertex object, a polygon object, and an entity object. Each entity consists of a list (array) of polygons; each polygon consists of a list of vertices and has a color; each vertex has three floating point number coordinates.

The MYVertex Object

We heard about the four steps of projection -- there are local, transformed, world and screen coordinates. The definition of coordinates happens in the vertex object. Let's call the vertex object MYVertex. MYVertex needs the following float instance variables:

  • ( lx | ly | lz ) to store the local coordinates of the vertex.
  • ( tx | ty | tz ) to store the transformed coordinates of the vertex.
  • ( wx | wy | wz ) to store the world coordinates of the vertex.
  • ( sx | sy ) to store the screen coordinates of the vertex. Note these are 2D.
  • I also define four more temporary float variables lt, tt, wt and st that will be needed during the transformation. Ignore those for now.

Of course I also write the accessor methods for the vertex object: methods to get and set the instance variables, and (for convenience) methods to add a value to them or multiply them by a value, respectively. The vertex initializer sets the three local coordinates to its arguments, the four temporary variables to 1.0, and all the other variables to 0.0. No dealloc method is necessary, since the data consists of nothing but primitive floats.

The MYPolygon Object

The next object to be implemented is the polygon object, let's call it MYPolygon. The polygon object has an array of vertices (given in counter-clockwise order) and a color, which I represent by one of Cocoa's NSColor objects. Additionally, a polygon has a position in 3D space, the so-called origin, which is given by three floats ( oriX | oriY | oriZ ) and their accessor methods.

MYPolygon also has a draw method that loops through the screen (!) coordinates of each of the vertices in the array and draws a closed and filled NSBezierPath object. Don't forget the dealloc and the initializer which sets the initial color and the vertex list.

- (void) draw:(NSRect)cliprect
{
    NSBezierPath   *polygon = [NSBezierPath bezierPath];
    int             v = 0;

    [[self color] set];
    /* Define start point */
    [polygon moveToPoint:[self screenCoordOfVertexAtIndex:0]];
    /* loop: Connect all points */
    for (v = 1; v < [self numOfVertices]; v++) {
        [polygon lineToPoint:[self screenCoordOfVertexAtIndex:v]];
    }
    /* draw and fill the polygon */
    [polygon closePath];
    [polygon fill];
    /* or for wireframe use [polygon stroke]; */
    [polygon removeAllPoints];
}

The MYEntity Object

Last we take care of the entity object, which will be named MYEntity. MYEntity has a list of polygons that constitute the entity, and also its own position, that is, an origin point ( oriX | oriY | oriZ ). It has a draw method that loops over all the polygons in the polygon array and calls their draw method, an obvious init and dealloc method, and a couple of necessary accessors.

But that's not all -- MYEntity is where the calculation of the transformed coordinates, the world coordinates and the projected screen coordinates is initiated. Therefore there are three special methods: transformation, toWorldCoordinates and projection.

Transformation is a complex issue, since I need to introduce matrices first. I will explain the real transformation later, for now I only give you a temporary fake transformation method that does nothing but copy the local coordinates to the transformed coordinates. Nothing happens here...

- (void) transformation
{
  /* Does nothing yet, just copies the local coordinates to the
   * vertices' transformation variables tx, ty, tz, tt.
   * The real transformation will be handed in later. */
  int p,v; MYVertex* theVertex; MYPolygon* thePolygon;
  int numOfPolygons=[self size];
  for(p=0;p < numOfPolygons;p++){
      thePolygon = [self polygon:p];
      int numOfVertices=[thePolygon size];
      for(v=0; v < numOfVertices; v++){
         theVertex=[thePolygon vertex:v];
         [theVertex setTX:[theVertex lx]]; // only fake!
         [theVertex setTY:[theVertex ly]]; // only fake!
         [theVertex setTZ:[theVertex lz]]; // only fake!
         [theVertex setTT:1.0];            // only fake!
      }
  }
}

The conversion to world coordinates places the object in its position in the 3D game world. The calculations are easy: Just add the entity's origin coordinates ([self oriX], [self oriY], [self oriZ]) to the vertices' transformed coordinates (tx, ty, tz) and store the result in the world coordinate variables (wx, wy, wz).

- (void) toWorldCoordinates
{
    /* Converts the transformed coordinates to world coordinates.
     * Stores results in wx, wy, wz. */
    int p,v; 
    int numOfPolygons=[self size];
    for(p=0;p < numOfPolygons;p++){
      MYPolygon* thePolygon = [self polygon:p];
      int numOfVertices=[thePolygon size];
      for(v=0;v < numOfVertices;v++){
          MYVertex* theVertex = [thePolygon vertex:v];
          [theVertex setWX:([theVertex tx]+[self oriX])];
          [theVertex setWY:([theVertex ty]+[self oriY])];
          [theVertex setWZ:([theVertex tz]+[self oriZ])];
      }
    }
}

Now for the grand finale, the projection. This method implements the projection formula shown above (sx=wx*c/wz, sy=wy*c/wz). The constant c is defined with #define c 400.0 in the header; you may want to turn this constant into a variable later to take advantage of the 'zooming' effects you gain by changing it. Apart from implementing the projection formula, the method also centers the screen coordinates in a last step.

- (void) projection:(NSRect)rect
{
    /* Converts 3D world coordinates to 2D screen coordinates.
     * Stores results in the vertices' variables sx, sy. */
    int p,v;
    float w = rect.size.width*0.5; 
    float h = rect.size.height*0.5;
    int numOfPolygons=[self size];
    for(p=0;p < numOfPolygons;p++){
      MYPolygon* thePolygon = [self polygon:p];
      int numOfVertices=[thePolygon size];
      for(v=0;v < numOfVertices;v++){
          MYVertex* theVertex = [thePolygon vertex:v];
          float depth=[theVertex wz];
          /* projection */
          if(depth==0.0) depth=0.0000000001; /* don't div by zero! */
          [theVertex setSX:(([theVertex wx]*c)/depth)];
          [theVertex setSY:(([theVertex wy]*c)/depth)];
          /* center */
          [theVertex addToSX:w];
          [theVertex addToSY:h];
      }
    }
}

Culling of Backfacing Polygons

That's almost it. If you defined a test entity now and drew it to the screen, you'd get a weird result: The back of the entity would be visible in the front. Why? Well, nobody told the drawing methods not to draw the backside of entities, right? What we need is one more step of optimization, which is called culling of backfacing polygons.

The following method goes into the MYPolygon object: It looks at the first three vertices' world coordinates, calculates their cross product and dot product and thus determines whether or not the polygon is backfacing in relation to the viewer. It is assumed that the viewer stands in the world's origin (0|0|0) and looks down the z-axis. (If you don't know what a dot product or a cross product is -- just trust Tieskoetter and Descartes.)

- (BOOL) isBackfacing
{
    float cullMe,x1,x2,x3,y1,y2,y3,z1,z2,z3;
    MYVertex *v0, *v1, *v2;
    v0=[self vertex:0];
    v1=[self vertex:1];
    v2=[self vertex:2];
    x1 = [v0 wx]; x2 = [v1 wx]; x3 = [v2 wx];
    y1 = [v0 wy]; y2 = [v1 wy]; y3 = [v2 wy];
    z1 = [v0 wz]; z2 = [v1 wz]; z3 = [v2 wz];
    cullMe = x3  * ((z1*y2)-(y1*z2)) +
	     y3  * ((x1*z2)-(z1*x2)) +
	     z3  * ((y1*x2)-(x1*y2)) ;
    return (cullMe < 0.0);
}

Now adapt the draw method of MYEntity to test each polygon before drawing it; MYEntity has to skip drawing polygons which face away from the viewer and therefore are not visible at all.

That's it! Define a test entity (for instance a cube), transform, convert and project it, then draw it to the screen from your custom NSView's drawRect method. Here is sample code for how to create a cube as a test entity.

Object Creation Sample Code

typedef struct _MYPoint {
    float x;
    float y;
    float z;
} MYPoint;

+ (MYEntity*) createCubeAt:(MYPoint)loc  center:(MYPoint)j
                         x:(float)x y:(float)y z:(float)z
{
    // Create the eight corner vertices of the cube
    MYVertex* a=[[MYVertex alloc] initWithX:0-j.x y:y-j.y z:0-j.z];
    MYVertex* b=[[MYVertex alloc] initWithX:x-j.x y:y-j.y z:0-j.z];
    MYVertex* c=[[MYVertex alloc] initWithX:x-j.x y:0-j.y z:0-j.z];
    MYVertex* d=[[MYVertex alloc] initWithX:0-j.x y:0-j.y z:0-j.z];
    MYVertex* e=[[MYVertex alloc] initWithX:0-j.x y:y-j.y z:z-j.z];
    MYVertex* f=[[MYVertex alloc] initWithX:x-j.x y:y-j.y z:z-j.z];
    MYVertex* g=[[MYVertex alloc] initWithX:x-j.x y:0-j.y z:z-j.z];
    MYVertex* h=[[MYVertex alloc] initWithX:0-j.x y:0-j.y z:z-j.z];
    // initialize six lists with those vertices (anti-clockwise)
    NSArray *vlist1 = [NSArray arrayWithObjects:a,e,f,b,nil];
    NSArray *vlist2 = [NSArray arrayWithObjects:f,g,c,b,nil];
    NSArray *vlist3 = [NSArray arrayWithObjects:d,c,g,h,nil];
    NSArray *vlist4 = [NSArray arrayWithObjects:a,d,h,e,nil];
    NSArray *vlist5 = [NSArray arrayWithObjects:b,c,d,a,nil];
    NSArray *vlist6 = [NSArray arrayWithObjects:e,h,g,f,nil];
    // construct six squares from those vertex lists
    MYPolygon *square1 =
	[[MYPolygon alloc] initWithVertexList:vlist1 
                           color:[NSColor greenColor]];
    MYPolygon *square2 =
	[[MYPolygon alloc] initWithVertexList:vlist2 
                           color:[NSColor yellowColor]];
    MYPolygon *square3 =
	[[MYPolygon alloc] initWithVertexList:vlist3  
                           color:[NSColor orangeColor]];
    MYPolygon *square4 =
	[[MYPolygon alloc] initWithVertexList:vlist4  
                           color:[NSColor redColor]];
    MYPolygon *square5 =
	[[MYPolygon alloc] initWithVertexList:vlist5  
                           color:[NSColor magentaColor]];
    MYPolygon *square6 =
	[[MYPolygon alloc] initWithVertexList:vlist6  
                           color:[NSColor blueColor]];
    // initialize a list with those six squares
    NSArray *plist = [NSArray arrayWithObjects:square1,square2,square3,
                              square4,square5,square6,nil];
    // construct a cube entity from this list
    MYEntity *cube = [[MYEntity alloc] initWithPolygonList:plist];

    [a retain]; [b retain]; [c retain]; [d retain];
    [e retain]; [f retain]; [g retain]; [h retain];
    [square1 retain]; [square2 retain]; [square3 retain];
    [square4 retain]; [square5 retain]; [square6 retain];
    [vlist1 retain]; [vlist2 retain]; [vlist3 retain];
    [vlist4 retain]; [vlist5 retain]; [vlist6 retain];
    [plist retain]; [cube retain];
    [cube setOriX:loc.x]; [cube setOriY:loc.y]; [cube setOriZ:loc.z];
    return cube;
}

The projection you just implemented displays static 3D entities on the screen. Next, read how to transform entities before projection.