Computer Graphics
Summer 2006

Tuesdays, 6pm - 8:20pm
WWH 101

Assignment 5
DUE: Thursday 8/10/06 midnight (11:59pm)

Note that the deadline above is FINAL, and I cannot grant any extensions since I need to submit the grades by the next day.

In this assignment you will implement a simple ray tracer. You will implement support for a camera, several primitives (sphere, box, polygon, cylinder, quadric), and shadows.

The input language of the ray tracer is a subset of the geometry file format of the POV-Ray ray tracer. It is strongly recommended that you install POV-Ray, or at least its documentation. See below for links.

Your program should read a file specified on the command line and create an image of the scene described in the file using ray casting. A simple parser for the POV-Ray format is provided below; feel free to modify it for your purposes. In addition to the name of the file, you should be able to specify the size of the output image on the command line. Since the complexity of the algorithm increases with the number of pixels in the image, you may want to start with small images first.

For shadows, follow the most basic shadowing approach: allow one light in the scene. If the light is blocked, as viewed from the intersection point on the object, color the pixel black; otherwise return the color of the object.
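This basic shadow test can be sketched as follows. Note that intersect_any is an assumed scene-query helper (not part of the provided parser) that reports whether any object lies between the point and the light:

```python
import math

def shade(hit_point, obj_color, light_pos, intersect_any):
    """Basic shadowing: black if the light is blocked, else the object color.

    intersect_any(origin, direction, max_t) is an assumed helper that
    returns True if any object is hit for a ray parameter t in (0, max_t).
    """
    to_light = [l - p for l, p in zip(light_pos, hit_point)]
    dist = math.sqrt(sum(c * c for c in to_light))
    direction = [c / dist for c in to_light]
    # Offset the shadow-ray origin slightly to avoid re-hitting the
    # surface itself ("shadow acne") due to floating-point error.
    eps = 1e-6
    origin = [p + eps * d for p, d in zip(hit_point, direction)]
    if intersect_any(origin, direction, dist):
        return (0.0, 0.0, 0.0)   # light blocked: pixel is black
    return obj_color
```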

In your submission please include images rendered by your ray tracer for the test cases provided below as well as your source code.

The POV-Ray File Format

The input file format is a subset of the POV-Ray file format. When in doubt, consult the POV-Ray manual (link below). The file in POV-Ray format is a sequence of object descriptions. You should implement the following POV-Ray objects: polygon, box, sphere, cylinder, quadric.

Comments. Two slashes are used for single line comments in the file. Anything on a line after a double slash // is ignored by the ray tracer. For example:
// This line is ignored
sphere { <0,0,0>, 1 } // a sphere of radius 1 centered at the origin

Objects. The basic syntax of an object is a keyword describing its type, some floats, vectors or other parameters which further define its location and/or shape and some optional object modifiers such as pigments or transformations. In this assignment the only modifiers that we consider are transformations. 

Sphere. The syntax for a sphere is:

sphere { <CENTER>, RADIUS }

where <CENTER> is a 3D point, and RADIUS is a real value specifying the radius. You can also apply translations, rotations, and scaling to the sphere. For example, this is a sphere of radius 10 centered at 25 on the Y axis:

sphere { <0,0,0>, 1.0
scale 10
translate <0,25,0>
}
Note that a 3D vector or point is specified using angle brackets, with the components separated by commas. Transformations can also be applied to all other objects (see details on transformations below).
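Intersecting a ray with a sphere reduces to solving a quadratic in the ray parameter t. A minimal sketch (names are my own, not part of the provided parser):

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the smallest positive t at which origin + t * direction
    hits the sphere, or None on a miss. direction need not be unit length."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                      # ray misses the sphere
    sq = math.sqrt(disc)
    eps = 1e-12
    for t in ((-b - sq) / (2.0 * a), (-b + sq) / (2.0 * a)):
        if t > eps:
            return t                     # nearest hit in front of the origin
    return None
```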

Box. A simple box can be defined by listing two corners of the box like this:

box { <CORNER1>, <CORNER2> }

<CORNER1> and <CORNER2> are vectors defining the x,y,z coordinates of opposite corners
of the box. For example:

box { <0, 0, 0>, <1, 1, 1> }

Note that all boxes are defined with their faces parallel to the coordinate axes. They may later be rotated to any orientation using a rotate parameter. Each element of <CORNER1> should always be less than the corresponding element of <CORNER2>.
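Since boxes are axis-aligned before transformation, the standard slab method works: intersect the ray with each pair of axis-aligned planes and keep the overlapping t interval. A sketch (helper names are my own):

```python
def intersect_box(origin, direction, corner1, corner2):
    """Slab test against an axis-aligned box. Returns the nearest
    positive t, or None if the ray misses. Assumes each component of
    corner1 is less than the corresponding component of corner2."""
    t_near, t_far = -float("inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, corner1, corner2):
        if abs(d) < 1e-12:
            if o < lo or o > hi:         # parallel ray outside this slab
                return None
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:
            return None                  # the slab intervals no longer overlap
    if t_far < 1e-12:
        return None                      # box is entirely behind the origin
    return t_near if t_near > 1e-12 else t_far
```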

Cylinder. A finite-length cylinder with parallel end caps may be defined by

cylinder { <POINT1>, <POINT2>, RADIUS }

where <POINT1> and <POINT2> are the centers of the end caps, and RADIUS is the cylinder radius.

cylinder { <0,0,0>, <0,3,0>, 2}
is a cylinder of height 3 with a radius of 2, and base centered at the origin in the XZ plane.
The ends of a cylinder are closed by flat planes which are parallel to each other and perpendicular to  the length of the cylinder.
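One way to intersect the curved side is to project the ray into the plane perpendicular to the cylinder axis, which leaves a 2D circle intersection. A sketch under that approach (my own helper names; a full implementation would also intersect the two cap disks):

```python
import math

def intersect_cylinder_side(origin, direction, p1, p2, radius):
    """Nearest positive t on the curved side of the finite cylinder with
    end-cap centers p1, p2, or None. End caps are not handled here."""
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    axis = sub(p2, p1)
    h2 = dot(axis, axis)                 # squared distance between caps
    oc = sub(origin, p1)
    # Remove components parallel to the axis, reducing the problem to a
    # 2D ray-circle intersection in the perpendicular plane.
    d_perp = sub(direction, [dot(direction, axis) / h2 * x for x in axis])
    o_perp = sub(oc, [dot(oc, axis) / h2 * x for x in axis])
    a = dot(d_perp, d_perp)
    b = 2.0 * dot(o_perp, d_perp)
    c = dot(o_perp, o_perp) - radius * radius
    disc = b * b - 4.0 * a * c
    if a < 1e-12 or disc < 0.0:
        return None
    sq = math.sqrt(disc)
    for t in ((-b - sq) / (2 * a), (-b + sq) / (2 * a)):
        if t > 1e-12:
            # Accept the hit only if it lies between the two cap planes.
            hit = [o + t * d for o, d in zip(origin, direction)]
            s = dot(sub(hit, p1), axis)
            if 0.0 <= s <= h2:
                return t
    return None
```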

Polygon. A polygon with N vertices is defined by:
polygon { N,<VERTEX1>, <VERTEX2>, ..., <VERTEXN> }
where <VERTEXi> is a vector defining the x,y,z coordinates of each corner of the polygon.
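A common way to intersect a planar polygon is to hit its supporting plane first, then decide inside/outside with a 2D crossing test after dropping the dominant coordinate of the normal. A sketch assuming a planar, non-degenerate polygon (names are my own):

```python
def intersect_polygon(origin, direction, verts):
    """Ray vs. planar polygon: intersect the supporting plane, then run
    a 2D crossing-number test. Returns the hit t, or None."""
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def cross(a, b):
        return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
    n = cross(sub(verts[1], verts[0]), sub(verts[2], verts[0]))
    denom = dot(n, direction)
    if abs(denom) < 1e-12:
        return None                       # ray parallel to the plane
    t = dot(n, sub(verts[0], origin)) / denom
    if t <= 1e-12:
        return None                       # plane is behind the origin
    p = [o + t * d for o, d in zip(origin, direction)]
    # Drop the coordinate with the largest normal component to get a
    # 2D polygon, then count edge crossings of a horizontal ray from p.
    k = max(range(3), key=lambda i: abs(n[i]))
    u, v = [i for i in range(3) if i != k]
    pts = [(vert[u], vert[v]) for vert in verts]
    px, py = p[u], p[v]
    inside = False
    for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1]):
        if (y0 > py) != (y1 > py):
            if px < x0 + (py - y0) * (x1 - x0) / (y1 - y0):
                inside = not inside
    return t if inside else None
```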

Quadric. Quadric surfaces can produce shapes like ellipsoids, spheres, cones, cylinders, paraboloids (dish shapes), and hyperboloids (saddle or hourglass shapes). A quadric is defined in POV-Ray by:
quadric { <A,B,C>, <D,E,F>, <G,H,I>, J }

where A through J are float expressions. This defines the surface of x, y, z points that satisfy the equation:

A x^2 + B y^2 + C z^2 + D xy + E xz + F yz + G x + H y + I z + J = 0

Different values of A, B, C, ..., J give different shapes. If you take any three-dimensional point and substitute its x, y, and z coordinates into the above equation, the result will be 0 if the point is on the surface of the object, negative if the point is inside the object, and positive if the point is outside. Here are some examples:
• x^2 + y^2 + z^2 − 1 = 0 defines a sphere;
• x^2 + y^2 − 1 = 0 defines an infinitely long cylinder along the Z axis;
• x^2 + y^2 − z^2 = 0 defines an infinitely long cone along the Z axis.
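Substituting the ray origin + t * direction into the quadric equation yields a quadratic a t^2 + b t + c = 0 in t, whose coefficients follow by collecting terms. A sketch (names are my own):

```python
import math

def intersect_quadric(origin, direction, coeffs):
    """Nearest positive t where the ray meets the quadric
    A x^2 + B y^2 + C z^2 + D xy + E xz + F yz + G x + H y + I z + J = 0,
    or None on a miss."""
    A, B, C, D, E, F, G, H, I, J = coeffs
    ox, oy, oz = origin
    dx, dy, dz = direction
    # Coefficients of the quadratic in t after substitution.
    a = A*dx*dx + B*dy*dy + C*dz*dz + D*dx*dy + E*dx*dz + F*dy*dz
    b = (2*(A*ox*dx + B*oy*dy + C*oz*dz)
         + D*(ox*dy + oy*dx) + E*(ox*dz + oz*dx) + F*(oy*dz + oz*dy)
         + G*dx + H*dy + I*dz)
    c = (A*ox*ox + B*oy*oy + C*oz*oz + D*ox*oy + E*ox*oz + F*oy*oz
         + G*ox + H*oy + I*oz + J)
    if abs(a) < 1e-12:                   # degenerate: at most one (linear) root
        if abs(b) < 1e-12:
            return None
        t = -c / b
        return t if t > 1e-12 else None
    disc = b*b - 4*a*c
    if disc < 0:
        return None
    sq = math.sqrt(disc)
    for t in ((-b - sq) / (2*a), (-b + sq) / (2*a)):
        if t > 1e-12:
            return t
    return None
```

Note that the sphere example above (A = B = C = 1, J = −1, all other coefficients 0) should agree with a dedicated sphere intersection routine.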

Camera. The camera definition describes the camera used for viewing the scene. If you do not specify a camera, a default camera is used. The camera is defined by the following parameters: location, look_at, angle, up, right. The location is simply the X, Y, Z coordinates of the camera. The default location is <0,0,0>. The up and look_at parameters correspond to the up and center parameters of gluLookAt.

By default, the look_at point is 1 unit in the Z direction from the location. The angle keyword followed by a float expression specifies the (horizontal) viewing angle in degrees. The right vector plays a limited role: the ratio of the length of this vector to the length of the up vector determines the aspect ratio of the image. The default values are up <0,1,0> and right <1.33333,0,0>.

Warning: POV-Ray has a relatively complicated camera model with many interdependent parameters; we are using a small subset. You may end up confused if you try to sort out the complete model from the POV-Ray documentation, so I suggest using this description together with the lecture notes.

In addition to the standard parameters, a camera may have a transformation attached to it; the parser reads it, if there is one, but you are not required to implement it.  
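The camera parameters above are enough to generate a primary ray through each pixel. A sketch of one way to do this, assuming the simplified model with no camera transformation (names are my own):

```python
import math

def primary_ray(px, py, width, height, location, look_at, up, angle_deg,
                aspect=4.0/3.0):
    """Ray through pixel (px, py) of a width x height image. angle_deg
    is the horizontal field of view; aspect is |right| / |up|."""
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def cross(a, b):
        return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
    def norm(a):
        l = math.sqrt(sum(x * x for x in a))
        return [x / l for x in a]
    # Orthonormal camera frame from location, look_at, and up.
    forward = norm(sub(look_at, location))
    right = norm(cross(forward, up))
    true_up = cross(right, forward)
    # Half-extent of the image plane at unit distance from the camera.
    half_w = math.tan(math.radians(angle_deg) / 2.0)
    half_h = half_w / aspect
    # Map pixel centers to [-1, 1], with (0, 0) at the top-left pixel.
    sx = (2.0 * (px + 0.5) / width - 1.0) * half_w
    sy = (1.0 - 2.0 * (py + 0.5) / height) * half_h
    direction = norm([f + sx * r + sy * u
                      for f, r, u in zip(forward, right, true_up)])
    return location, direction
```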

Transformations. Three standard types of transformations can be applied, in any order and any number, to any object: translate, rotate and scale. The argument of rotate is a vector with three components, which are the angles of rotation around the x, y and z axes in degrees. The scale is a nonuniform scale and its argument is a vector of three scale factors in the x, y and z directions. The translate takes the translation vector as an argument. The parser reads all transformations and converts them to 4 by 4 matrices.
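A common way to handle transformed objects is to map the ray into the object's local space with the inverse of the object's 4x4 matrix, intersect the untransformed primitive there, and reuse the resulting t in world space (valid as long as the transformed direction is not renormalized). A sketch (my own helper, not part of the provided parser):

```python
def transform_ray(inv_matrix, origin, direction):
    """Map a world-space ray into object space using the object's
    inverse transformation matrix (4x4, row-major list of rows).
    Points receive the full affine transform; directions ignore the
    translation column."""
    def mat_apply(m, v, w):
        # w = 1.0 transforms a point, w = 0.0 a direction.
        return [m[i][0]*v[0] + m[i][1]*v[1] + m[i][2]*v[2] + m[i][3]*w
                for i in range(3)]
    return mat_apply(inv_matrix, origin, 1.0), mat_apply(inv_matrix, direction, 0.0)
```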

Pigment. The color or pattern of colors for an object is defined by a pigment statement. For example,

sphere { <0,0,0>, 1
pigment {color rgb<1.0, 0.0, 0.0> }
}
defines a red sphere. The color you define is the way you want it to look if fully illuminated. The
parameter is called pigment because we are defining the basic color the object actually is rather
than how it looks.

We will implement only the simplest pigment type, which uses the color statement to specify the pigment. As in the case of light sources, the color statement has the form color rgb<R, G, B>. In addition, the color statement may have the form color rgbf<R, G, B, F>, where F stands for "filter" and determines how transparent the object is. The default filter value is 0, which means that the surface of the object is not transparent.

Light sources. Light sources have no visible shape of their own. We will use only one type of light source: point lights. The syntax for a light source is:

light_source { <X, Y, Z> color rgb<R, G, B> }

where X, Y and Z are the coordinates of the location and R, G and B are the components of the color in the range 0 to 1. For example,

light_source { <3, 5, -6> color rgb<0.5, 0.5, 0.5> }

is a 50% white light at X=3, Y=5, Z=-6.

You can ignore the color properties of the light for the shadow implementation. All you need for shadows is the position of the light, so that you can shoot a ray toward it.

You can implement a basic lighting model for extra credit.

Finish. The finish statement defines other parameters of the lighting model. In general, the finish statement contains a number of items that control ambient, diffuse and specular reflection.

A complete finish statement looks like this:
finish {
ambient 0.1
diffuse 0.6
phong 0.0
phong_size 40
metallic 0
reflection 0.0
}

Any item can be omitted; in that case it is assigned a default value. The default values are listed above.

The ambient component has effect only if there is a non-zero ambient light, which is specified using a special statement like this:

global_settings {ambient_light COLOR}

where COLOR is of the form rgb <r,g,b> as for a light source. The default is no ambient light.

Simplified lighting formula. We state a simplified formula for one component (red); the formulas for the other two components are obtained by replacing red with green or blue.

Assume that for a given point there are M visible lights, with colors (r_i, g_i, b_i), i = 1...M. Assume that the pigment for the object was given by pigment { color rgb <m_r, m_g, m_b> }, and the finish is

ambient k_amb
diffuse k_diff
phong k_spec
phong_size p
reflection k_refl

Further, assume that the unit direction to the i-th light source is L_i, the unit direction to the eye is V, the unit normal is N, and R_i is L_i reflected about N. The total red intensity at the point is then

red = k_amb * a_r * m_r + sum over i = 1...M of r_i * ( k_diff * m_r * max(N . L_i, 0) + k_spec * max(R_i . V, 0)^p ) + k_refl * red_refl

where a_r is the red component of the ambient light and red_refl is the red intensity arriving along the mirrored view direction (zero if you do not implement reflection).
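A sketch of this per-channel lighting computation in Python (helper names are my own; the recursive reflection term is left out):

```python
def channel_intensity(m, k_amb, k_diff, k_spec, p, ambient, lights, N, V):
    """Evaluate the simplified lighting formula for one color channel.
    lights is a list of (intensity, L) pairs with L the unit direction
    to the light; N and V are the unit normal and unit direction to the
    eye. The k_refl reflection term is omitted here."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    total = k_amb * ambient * m
    for li, L in lights:
        ndotl = max(dot(N, L), 0.0)
        # Reflect L about N to get the mirror direction R_i.
        R = [2.0 * ndotl * n - l for n, l in zip(N, L)]
        total += li * (k_diff * m * ndotl
                       + k_spec * max(dot(R, V), 0.0) ** p)
    return min(total, 1.0)   # clamp to the displayable range
```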
POV Ray Parser

Test Cases

//for image 1
camera {
location <0.5,0.5,1>
look_at <0.5,0.5,0>
angle 90
}
//for image 2
camera {
location <2,1,2>
right <1.5,0,0>
look_at <1,1,1>
angle 90
}
//for image 3
camera {
location <2,2,2>
look_at <0,0,0>
up <1,0,0>
angle 90
}


- If you can only get parts of the homework working, then, to get partial credit, submit more images displaying the parts that do work. (For example, if you only implemented spheres, provide an image of a sphere rendered by your ray tracer.)

- See lecture notes for implementation suggestions.

- Initially, make sure you can get a flat-shaded version of the primitives. Once this works properly, add shadows.

- Implement the project incrementally. Test at each step.

- Remember to never test if (bla1 - bla2 == 0); always use if (fabs(bla1 - bla2) < eps) for some small epsilon (e.g. 1e-12).

Adapted from one of Denis Zorin's assignments.