Ray tracing is a way to render a scene
in object space, rather than image space.
The fundamental idea is to shoot a ray
into the scene at every pixel.
Whatever object in the scene this ray hits first,
that is the object visible at that pixel.
We can define a ray using an origin point V
and a unit-length direction vector W.
Any point on the ray can be described by:
V + t W for t >= 0
Note that points where t < 0 are not part of the ray,
because those points are behind the ray origin V,
not in front of it.
In object space, we can describe the image plane itself
as a square, with x == -1 along the left edge, x == +1
along the right edge, y == -1 along the bottom edge,
and y == +1 along the top edge. The value of z is zero.
As it happens, that is exactly the range of values
for vPos in the shader programs you are writing.
Since the observer is in front of the square,
and our coordinate system follows the right hand rule,
she will be located at a point on the positive z axis.
Adopting the convention of photography, we use
the variable f (for "focal length") for the
distance of the observer from the image plane.
We say that the observer is at location:
V = [0,0,f]
To form a ray through any pixel [x,y] of the image,
we first take the difference vector from the observer
to that pixel:
[x, y, 0] - [0, 0, f] = [x, y, -f]
Then we normalize that vector to get the ray direction:
W = normalize([x, y, -f])
In shader language, the same computation looks like this:
vec3 W = normalize(vec3(vPos.x, vPos.y, -f));
Any point along this ray can be described by:
[V_{x} + t W_{x}, V_{y} + t W_{y}, V_{z} + t W_{z}], where t >= 0
We can use our ray to "ray trace" to various objects in the scene. Suppose, for example,
we want to ray trace to a sphere.
To do this, we first need to come up with a good definition
of a sphere shape. Since a sphere has a center location C and
a radius r, we can define a sphere using the following four values:
C_{x}, C_{y}, C_{z}, r
To ray trace to this sphere, we need to find what point
(if any) along the ray is on the surface of the sphere.
Points on the surface of the sphere are going to be those
points that are a distance r from the center of the sphere.
Recall, from our definition of the dot product,
that the magnitude squared of any vector v is:
v_{x}^{2} + v_{y}^{2} + v_{z}^{2}
Using this fact, we can compute the distance squared from any point [x,y,z] to the sphere center by:
(x - C_{x})^{2} + (y - C_{y})^{2} + (z - C_{z})^{2}
So if a point [x,y,z] is on the sphere, it must be true that:
(x - C_{x})^{2} + (y - C_{y})^{2} + (z - C_{z})^{2} = r^{2}
Recall that any point along our ray can be described by:
[V_{x} + t W_{x}, V_{y} + t W_{y}, V_{z} + t W_{z}], where t >= 0
To make our math easier, let's shift coordinates,
so that the sphere is at the origin [0,0,0].
To do this, we just need to replace V by
P = V - C
This substitution moves the sphere to the origin.
Now our ray equation becomes:
[P_{x} + t W_{x}, P_{y} + t W_{y}, P_{z} + t W_{z}], where t >= 0
We can substitute this point into our equation for the
sphere (which is now at the origin) to get:
(P_{x} + t W_{x})^{2} + (P_{y} + t W_{y})^{2} + (P_{z} + t W_{z})^{2} = r^{2}
Multiplying out the terms, we get:
t^{2} (W_{x} * W_{x} + W_{y} * W_{y} + W_{z} * W_{z}) +
t (2 * (W_{x} * P_{x} + W_{y} * P_{y} + W_{z} * P_{z})) +
(P_{x} * P_{x} + P_{y} * P_{y} + P_{z} * P_{z} - r^{2}) = 0
Using our definition of dot product, we can rewrite this as:
(W ● W) t^{2} + 2 (W ● P) t + (P ● P - r^{2}) = 0
But since W is unit length, (W ● W) is just 1. This further simplifies our equation to:
t^{2} + 2 (W ● P) t + (P ● P - r^{2}) = 0
We can now solve the standard quadratic equation,
t = (-B ± sqrt(B^{2} - 4AC)) / 2A
where in our case:
A = 1
B = 2 (W ● P)
C = P ● P - r^{2}
to get:
t = (-2(W ● P) ± sqrt(4(W ● P)^{2} - 4(P ● P - r^{2}))) / 2
or:
t = -(W ● P) ± sqrt((W ● P)^{2} - P ● P + r^{2})
Equation 1
The above equation can have zero, one or two real solutions,
depending on whether the expression inside the square root
is negative, zero or positive, respectively.
Zero real solutions means that the ray has missed the sphere.
If there is a real solution but t is negative, that means
the sphere is behind the ray.
Otherwise,
one solution means the ray is just barely grazing the sphere, and
two solutions means the ray is going into the sphere at one point
and then exiting out at another point.
If you do find two positive roots, then you
want the smaller of the two roots, because
that is the one where the ray enters the sphere.
Once you find t, you can find the intersection point S on the sphere surface
by plugging t into the ray equation:
S = V + t W
Then in order to do lighting on the sphere, you can
find the normalized vector from the center of the
sphere to this surface point:
N = normalize(S - C)
To summarize the algorithm:
- vec3 V = vec3(0., 0., f);
- vec3 W = normalize(vec3(vPos.x, vPos.y, -f));
- vec3 P = V - C;
- Solve the quadratic equation above (Equation 1) to compute t
- If the first root t exists and is positive:
  - vec3 S = V + t * W;
  - vec3 N = normalize(S - C);
HOMEWORK
Due before class on Tuesday Feb 13.
Below is a slightly cleaned up version of the code
that I wrote at the end of class.
Notice that it defines a vec3 variable N
which gives the surface normal direction on a sphere.
For pixels outside the sphere, N.z is set to -1.0.
For pixels within the sphere, N is used to
do shading of the sphere.
Write a simple shader program that ray traces
to every pixel, using the algorithm described
above. You should be able to end up with a value
for N which you can test with the code below.
uniform float uTime;  // time in seconds
varying vec3 vPos;    // pixel position, -1 to +1 in x and y

vec3 sph(float x, float y) {
   float zz = 1. - x * x - y * y;        // z squared on the unit sphere
   float sign = step(0., zz);            // 1. inside the sphere, 0. outside
   float z = mix(-1., sqrt(max(0., zz)), sign);  // max() avoids sqrt of a negative
   return vec3(x, y, z);
}

void main() {
   vec3 N = sph(2. * vPos.x, 2. * vPos.y);
   vec3 L = vec3(.5 * sin(5. * uTime), 1., 1.);   // moving light direction
   float c = max(.1, dot(N, normalize(L)));       // diffuse shading
   c = step(0., N.z) * .5 * c;                    // black outside the sphere
   vec3 color = vec3(c, c*c, c*c*c);
   gl_FragColor = vec4(sqrt(color), 1.0);         // gamma correction
}