Course notes for October 30

Modifying text in your html file:

If your web page has a named span element, such as:

   <span id=foo>this is my span element</span>
then you can set the text of that span via the following code in your script:
   document.getElementById(id).firstChild.nodeValue = str;
where 'id' in the above case would be "foo", and 'str' is whatever you would like the new text to be.

I've found it convenient to implement this as a function:

   function setSpan(id, str) {
      document.getElementById(id).firstChild.nodeValue = str;
   }

Using this technique, you can empower your WebGL interactions to make changes to the text of the surrounding document, which will allow you to do things such as having your document text correctly state the current angle or length of something in an accompanying interactive figure.
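
For example, you might update a span as the user drags something in a figure. The snippet below shows this pattern (the span id "angle" is hypothetical, and a minimal stand-in for the browser's document object is included just so the code can run outside a web page; in the browser you would use the real document):

```javascript
// Minimal stand-in for the browser's document object, so this snippet can
// run outside a web page. In the browser, just use the real document.
var document = {
   _spans: { angle: { firstChild: { nodeValue: "old text" } } },
   getElementById: function (id) { return this._spans[id]; }
};

function setSpan(id, str) {
   document.getElementById(id).firstChild.nodeValue = str;
}

// e.g. keep the document text in sync with an interactive figure:
setSpan("angle", "The current angle is 45 degrees.");
```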

Setting white background color for your scene:

You can set the background color of your WebGL canvas to be anything you want. In particular, if you want your figure to "blend in" to the page, so that it looks like a diagram in your document, you might want to set it to have the same background color as the rest of your page (e.g., white).

To do this, in gl.js you can change the line:

   gl.clearColor(0.0, 0.0, 0.0, 1.0);

to:

   gl.clearColor(1.0, 1.0, 1.0, 1.0);

Improved noise in JavaScript:

As I mentioned in class, I've ported my Improved Noise algorithm to JavaScript, as the file inoise.js.

You can use it to model shapes in various ways. For example, a bumpy spheroid might be implemented like this:

    var sph = function(u,v) {
       var theta = 2 * Math.PI * u,
           phi = Math.PI * (v - .5),
           cosT = Math.cos(theta) , cosP = Math.cos(phi) ,
           sinT = Math.sin(theta) , sinP = Math.sin(phi) ;
       var x = cosT * cosP, y = sinT * cosP, z = sinP;
       var r = 1 + noise(2*x, 2*y, 2*z) / 12
                 + noise(4*x, 4*y, 4*z) / 24;
       return [ r * x, r * y, 1.3 * r * z ];
    };

Feel free to use this function in your geometric modeling.
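
As a quick sanity check of the parametrization, here is the same function run with a stand-in noise that just returns 0 (the real noise() from inoise.js returns values in roughly [-1, 1]), so the shape reduces to an undisplaced spheroid:

```javascript
// Stand-in for the noise() function from inoise.js -- always 0 here,
// so sph reduces to a plain spheroid for checking.
var noise = function (x, y, z) { return 0; };

var sph = function (u, v) {
   var theta = 2 * Math.PI * u,
       phi = Math.PI * (v - .5),
       cosT = Math.cos(theta), cosP = Math.cos(phi),
       sinT = Math.sin(theta), sinP = Math.sin(phi);
   var x = cosT * cosP, y = sinT * cosP, z = sinP;
   var r = 1 + noise(2 * x, 2 * y, 2 * z) / 12
             + noise(4 * x, 4 * y, 4 * z) / 24;
   return [r * x, r * y, 1.3 * r * z];
};

// u=0, v=.5 is a point on the equator, at [1,0,0];
// v=1 is the north pole, stretched to z = 1.3 by the 1.3 * r * z term.
```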

Triangle strips:

As I explained in class, triangle strips allow you to keep the transfer of geometry data from your CPU to your GPU down to roughly one vertex per triangle. In the version of gl.js I provided last week, gl.TRIANGLE_STRIP is already enabled, so all you need to do is add vertices to your vertices array in the proper order.
As I said in class, a way to do this with a general M×N mesh is to scan through rows (0 ≤ n < N) in the outer loop, and through columns (0 ≤ m ≤ M) in the inner loop, scanning from left to right for even rows, then from right to left for odd rows. At each step of the inner loop, append the two vertices [m,n] and [m,n+1] to the vertices array.

I realized after class that in order to avoid spurious triangles between rows, you can repeat the last vertex two more times in each row. This will create a degenerate triangle, which is harmless.

To the right is an example of such an ordering.
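
The serpentine ordering described above can be sketched as follows. This is just one way to write it; the function returns grid indices [m,n], whereas in practice you would append the corresponding vertex data to your vertices array:

```javascript
// Build a triangle-strip ordering for an M-by-N grid of quads, scanning
// left-to-right on even rows and right-to-left on odd rows.
function stripOrder(M, N) {
   var order = [];
   for (var n = 0; n < N; n++) {
      for (var i = 0; i <= M; i++) {
         var m = (n % 2 == 0) ? i : M - i;   // serpentine direction
         order.push([m, n], [m, n + 1]);
      }
      // Repeat the last vertex two more times, creating harmless
      // degenerate triangles instead of spurious ones between rows.
      var last = order[order.length - 1];
      order.push(last, last);
   }
   return order;
}
```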

Texture mapping:

In order to load textures while working on your local computer, you need to make your browser think of your local file system as a server source. You can do that via this shell script, which was graciously provided by Francisca:

   python -m SimpleHTTPServer &
   open -a /Applications/* http://localhost:8000
In order to do texture mapping, you need to include the following code in your drawObject() function, after the line gl.bindBuffer(gl.ARRAY_BUFFER, vBuffer);:
   if (! (obj.textureSrc === undefined)) {
      if (obj.texture === undefined) {
         var image = new Image();
         image.onload = function() {
            var gl = this.gl;
            gl.bindTexture   (gl.TEXTURE_2D, this.obj.texture = gl.createTexture());
            gl.texImage2D    (gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, this);
            gl.texParameteri (gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
            gl.texParameteri (gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_NEAREST);
            gl.generateMipmap(gl.TEXTURE_2D); // needed for the mipmapped MIN_FILTER above
            gl.bindTexture(gl.TEXTURE_2D, null);
         };
         image.gl = gl;
         image.obj = obj;
         image.src = obj.textureSrc;
      }
      else {
         gl.bindTexture(gl.TEXTURE_2D, obj.texture);
         gl.uniform1i(gl.getUniformLocation(sProgram, "uSampler"), 0);
      }
   }

This will then allow you to define a shader that contains a texture sampler, such as:

<script id=fs_wood type=x-shader/x-fragment>
   uniform sampler2D uSampler;
   uniform vec3 uLDir;
   varying vec3 vNormal;
   varying vec2 vUV;
   void main(void) {
      float d = .1 + .9 * max(0., dot(uLDir, normalize(vNormal)));
      vec3 rgb = vec3(d,d,d);
      vec3 trgb = ungammaCorrect(texture2D(uSampler, vUV).xyz);
      rgb = rgb * trgb;
      gl_FragColor = vec4(gammaCorrect(rgb), 1.);
   }
</script>
where the ungammaCorrect and gammaCorrect functions can be defined, respectively, as follows (note that GLSL requires them to appear above main(), since functions must be declared before use):
   vec3 ungammaCorrect(vec3 c) { return vec3(pow(c.x,2.222),pow(c.y,2.222),pow(c.z,2.222)); }
   vec3 gammaCorrect(vec3 c) { return vec3(pow(c.x,.45),pow(c.y,.45),pow(c.z,.45)); }
Note in the above how we needed to un-gamma-correct the loaded texture image before combining it linearly with the other shading factors. Then, when everything is done, we gamma-correct the final result.

Now, when you define an object that uses a texture sampler, all you need to do is set its textureSrc field to the name of the texture file, such as:

   objects[0].textureSrc = "wood_floor.jpg";

Bump mapping:

For fine perturbations of a surface, it can be very expensive to generate and render the large number of triangles required for true surface normal perturbation. Therefore for finely detailed bumps, we sometimes just use Bump Mapping, a technique first described by Jim Blinn about 40 years ago.

The basic idea is to modulate the surface normal, within the fragment shader, to reflect the changes in surface direction that would be produced by an actual bumpy surface. Since the human eye is more sensitive to variations in shading than to variations in object silhouette, this technique can produce a fairly convincing approximation to the appearance of surface bumpiness, at a fraction of the computational cost of building finely detailed geometric models.

To do bump mapping of a procedural texture T that is defined over the (x,y,z) domain (the noise function is an example of one such procedural texture), we can think of the value of T(x,y,z) as a variation in surface height. In order to simulate the hills and valleys of this bumpy surface, we subtract the derivative of T from the normal (because the normal will point toward the valleys), and then renormalize to restore the normal vector to unit length.

We can approximate the vector valued derivative at surface point (x,y,z) by finite differences (where ε below is some very small positive number):

p0 = T(x,y,z)

px = (T(x+ε, y, z) - p0) / ε
py = (T(x, y+ε, z) - p0) / ε
pz = (T(x, y, z+ε) - p0) / ε

which we can then use to modify the surface normal:
normal ← normalize( normal - vec3(px,py,pz) )
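
Here is a runnable sketch of that procedure, using a simple made-up procedural texture T (a sine ripple, standing in for something like the noise function):

```javascript
// A made-up procedural texture T(x,y,z), standing in for noise(x,y,z).
function T(x, y, z) { return .1 * Math.sin(10 * x); }

// Perturb a unit surface normal by subtracting the finite-difference
// gradient of T, then renormalize, exactly as described above.
function bumpNormal(normal, x, y, z, eps) {
   var p0 = T(x, y, z);
   var px = (T(x + eps, y, z) - p0) / eps;
   var py = (T(x, y + eps, z) - p0) / eps;
   var pz = (T(x, y, z + eps) - p0) / eps;
   var nx = normal[0] - px, ny = normal[1] - py, nz = normal[2] - pz;
   var len = Math.sqrt(nx * nx + ny * ny + nz * nz);
   return [nx / len, ny / len, nz / len];   // restore unit length
}

// The perturbed normal is still unit length, but now tilts toward the
// local "valleys" of T.
var n = bumpNormal([0, 0, 1], .3, 0, 0, .0001);
```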

Advanced topic: Bump mapping from a texture image is more tricky, since you need to know the direction to move the surface normal with respect to changes in the u and v directions of the texture, respectively. Note that in the implementation of function createParametric(), I already compute exactly those quantities for each vertex, as ux,uy,uz and vx,vy,vz, respectively. These two direction vectors, which are both tangential to the surface, are sometimes called the tangent and bitangent vectors.

In order to do bump mapping from a texture image, you will need to include the tangent and bitangent vectors as part of your vertex data for each vertex. This will increase the data size in each element of the vertices array from 10 numbers to 16 numbers. Only try this if you are very confident, and you think you understand the interface to WebGL.
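
One way to sketch the packing (treating the existing 10-number vertex as an opaque array, since its exact layout depends on your version of gl.js; the names ux,uy,uz and vx,vy,vz follow createParametric()'s convention):

```javascript
// Append tangent (ux,uy,uz) and bitangent (vx,vy,vz) to an existing
// 10-number vertex, giving the 16-number layout described above.
function packVertex(vertex10, ux, uy, uz, vx, vy, vz) {
   return vertex10.concat([ux, uy, uz], [vx, vy, vz]);
}

// Hypothetical example: some 10-number vertex, plus a tangent along x
// and a bitangent along y.
var v = packVertex([0,0,1, 0,0,1, .5,.5, 0,0], 1,0,0, 0,1,0);
```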


Implement as many of the topics we covered in class as you would like. As usual, make something cool and fun and interesting. Surprise me if you can. :-)

Definitely implement both proper triangle strips and texture mapping, since those will be important for the assignments that follow in later weeks.