| BeanGuy Mouse Controls |
| Mouse | Action |
| LEFT BUTTON DRAG | Rotate View |
| RIGHT BUTTON OR ALT-LEFT BUTTON PRESS | Animate Selected Part |
| MIDDLE BUTTON OR CTRL-LEFT BUTTON PRESS | Highlight Selected Part |
BeanGuy Version 2.0
BeanGuy Version 2.0 is a simple applet to illustrate the work I did for my Fall 2002
Independent Study Project.
Background
Initially, I turned to Sun's Java3D API for my 3D rendering purposes. In addition to basic 3D rendering support,
Java3D appealed to me because it included support for loading 3D model files, which could be created using commercial
modeling packages such as 3DStudioTM or MayaTM, and
because it supported picking, the ability to select rendered objects
in 3D space from 2D mouse coordinates. I had planned to use these capabilities to create a customizable avatar
applet for the web. For BeanGuy Version 1.0, I was successful in getting Java3D to load OBJ files modeled in Maya. I
then wrote an applet which allowed the user to load a BeanGuy avatar in various poses and then use picking to select
various parts to customize.
Unfortunately, Java3D had a few drawbacks that bothered me. Using Java3D effectively in web pages
seemed overly complicated, as it required the average internet user to be up to date with recent browsers and/or
recent releases of Java. It also required a browser plug-in that is not included with standard Java releases. Finally,
because Java3D is not a standard component, I did not want to depend on its continued support.
Thus, for BeanGuy Version 2.0 I chose to customize another Java 3D rendering API, developed by Professor Ken Perlin at NYU,
to support the Java3D capabilities I needed. These capabilities included 3D model file
loading and picking.
Loading 3D Models
Fortunately, another student had already written a utility called vrml2java. This utility produces a Java applet which displays the objects
defined in a given VRML file using Prof. Perlin's rendering API. For my purposes I needed something slightly different: a Java class, generated
from a VRML file, that could create the Geometry objects as before, but under one parent Geometry to be returned to an applet that had created
an instance of that class. So, I added an argument to vrml2java to request the production of just a geometryLoader rather than a full applet. When
a user of vrml2java requests a geometryLoader, a Java file is produced called <VRMLFileRootName>Loader.java. This Java file defines a
geometryLoader class with methods to support the loading and usage of the Geometry objects based upon those defined in the associated VRML file.
For animation purposes, I also keep track of Geometry names. For instance, my BeanGuy model to the left has child parts
which were named when I modeled them in Maya. To animate the child parts I would rather not reload multiple key frames for the
entire BeanGuy; if I just wanted to animate the left leg, I would only want to load a new Geometry defining that leg in
a new position. Hence, naming code was added to vrml2java's geometryLoader to maintain part names.
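The generated loader can be sketched roughly as follows. This is a simplified stand-in, not the actual generated code: the Geometry stand-in, the part names, and the load() method are all illustrative assumptions; only the ideas (one parent Geometry, named child parts) come from the description above.

```java
import java.util.HashMap;
import java.util.Map;

// Rough sketch of the geometryLoader idea: build all child Geometry
// objects under one parent and keep their names, so a single part
// (e.g. a leg) can later be looked up and swapped for animation
// without reloading the whole model. All names here are illustrative.
public class BeanGuyLoaderSketch {

    // Simplified stand-in for the renderer's Geometry class.
    static class Geometry {
        String name;
        Map<String, Geometry> children = new HashMap<>();

        Geometry(String name) { this.name = name; }

        Geometry add(String childName) {
            Geometry child = new Geometry(childName);
            children.put(childName, child);   // remember the part by name
            return child;
        }

        Geometry getChild(String childName) { return children.get(childName); }
    }

    // Emulates what a generated <Model>Loader might do: create the
    // Geometry objects and return them under one parent.
    static Geometry load() {
        Geometry root = new Geometry("beanGuy");
        root.add("leftLeg");     // hypothetical part names
        root.add("rightLeg");
        root.add("head");
        return root;
    }

    public static void main(String[] args) {
        Geometry beanGuy = load();
        // Part names survive loading, so one part can be found and
        // replaced without touching the rest of the model.
        System.out.println(beanGuy.getChild("leftLeg").name);
    }
}
```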
Picking
Picking allows users to interact with 3D models more intuitively. To get an idea of what I mean, first use the keyboard to make BeanGuy go through
some of his animations. Then use the right mouse button to click on animated parts, (if you don't have a right mouse button, click the main mouse
button while holding down the ALT key). Clicking on BeanGuy is more interactive. For a second example of the benefits of picking, click the
middle mouse button on any part of BeanGuy to highlight that part (many mouses and/or browsers do not support the middle mouse button, so hold down CTRL
while clicking the main mouse button if necessary). Picking is the most intuitive way for a user to affect something he or she can see. Imagine trying
to list BeanGuy's part names and requiring the user to highlight the parts by selecting the associated name.
To implement the picking ability I initially planned to project a vector from the 2D mouse coordinates into the 3D space, checking for any Geometry object
intersections. About halfway through the implementation of my vector idea, I found a much better way to implement picking. However, I had expended a decent amount
of effort on adding code to the Geometry class to maintain bounding box information and to display bounding boxes for debugging purposes. The code remains
because bounding boxes would facilitate the implementation of things like collision detection. Try toggling the display of bounding boxes for BeanGuy's non-pickable
Geometries. Unfortunately, the pickable parts of BeanGuy also happen to be parts that have been replaced by AnimatedGeometry objects, and thus the bounding boxes
defined for these parts, when the base BeanGuy model was loaded, are gone. (Bounding boxes for the legs and feet will appear if the walking animation has not yet
been run. To see those after running the walking animation you can reload the page.)
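The bounding-box bookkeeping mentioned above might look something like this. The field and method names are illustrative, not the actual code added to the Geometry class: an axis-aligned box that grows as vertices are included, plus the kind of containment test collision detection would build on.

```java
// Sketch of axis-aligned bounding-box maintenance: the box expands
// to cover each vertex as it is added. Names are illustrative.
public class BoundingBoxSketch {

    static class BoundingBox {
        double minX = Double.POSITIVE_INFINITY, minY = Double.POSITIVE_INFINITY,
               minZ = Double.POSITIVE_INFINITY;
        double maxX = Double.NEGATIVE_INFINITY, maxY = Double.NEGATIVE_INFINITY,
               maxZ = Double.NEGATIVE_INFINITY;

        // Expand the box to include one vertex.
        void include(double x, double y, double z) {
            minX = Math.min(minX, x); maxX = Math.max(maxX, x);
            minY = Math.min(minY, y); maxY = Math.max(maxY, y);
            minZ = Math.min(minZ, z); maxZ = Math.max(maxZ, z);
        }

        // True if the point lies inside or on the box; a test like this
        // is the starting point for simple collision detection.
        boolean contains(double x, double y, double z) {
            return x >= minX && x <= maxX
                && y >= minY && y <= maxY
                && z >= minZ && z <= maxZ;
        }
    }

    public static void main(String[] args) {
        BoundingBox box = new BoundingBox();
        box.include(-1, 0, 0);
        box.include(1, 2, 1);
        System.out.println(box.contains(0, 1, 0.5));  // inside the box
        System.out.println(box.contains(3, 1, 0.5));  // outside the box
    }
}
```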
The better idea for picking is much less complicated than vector intersection checking and is close to flawless in its ability to detect
the exact Geometry object upon which the user clicked. I added a new method to the Renderer class, called
setMouseLocation(), that allows the user to specify an (x,y) mouse coordinate to check in subsequent rendering passes.
In each rendering pass, I set a currentGeometry pointer to the current Geometry being rendered. Then, as the rendering loop is calculating
pixel values for that Geometry, I check if the current pixel being
rendered equals the current mousePixel and, if so, I set a global mouseGeometry pointer to the currentGeometry pointer. Another method, called getMouseGeometry(),
can then be called to retrieve the Geometry object corresponding to the last mouse location provided. Simple, but effective.
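The scheme can be sketched as follows. The method names setMouseLocation(), getMouseGeometry(), currentGeometry, and mouseGeometry mirror the description above, but the Geometry class and the per-pixel rendering loop are simplified stand-ins, not the real renderer.

```java
// Sketch of per-pixel picking: while rasterizing each Geometry, compare
// every pixel it covers against the recorded mouse coordinate; on a
// match, remember which Geometry produced that pixel.
public class PickingSketch {

    static class Geometry {
        String name;
        Geometry(String name) { this.name = name; }
    }

    static class Renderer {
        private int mouseX = -1, mouseY = -1;
        private Geometry mouseGeometry;
        private Geometry currentGeometry;

        // Record the 2D mouse coordinate to check in subsequent passes.
        void setMouseLocation(int x, int y) { mouseX = x; mouseY = y; }

        // Stand-in for one Geometry's turn in the rendering loop; pixels
        // is the list of {x, y} screen pixels this Geometry covers.
        void renderGeometry(Geometry g, int[][] pixels) {
            currentGeometry = g;
            for (int[] p : pixels) {
                if (p[0] == mouseX && p[1] == mouseY) {
                    mouseGeometry = currentGeometry;  // mouse is over this one
                }
            }
        }

        // Retrieve the Geometry under the last mouse location provided.
        Geometry getMouseGeometry() { return mouseGeometry; }
    }

    public static void main(String[] args) {
        Renderer r = new Renderer();
        r.setMouseLocation(5, 5);
        r.renderGeometry(new Geometry("head"),    new int[][] {{1, 1}, {2, 2}});
        r.renderGeometry(new Geometry("leftLeg"), new int[][] {{4, 4}, {5, 5}});
        System.out.println(r.getMouseGeometry().name);  // the leg covers (5,5)
    }
}
```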
Presence
If you haven't already, try toggling BeanGuy's presence. Wait a second
between toggles to really see what happens. Presence is the combination of blinking and noise applied to the rotation of
BeanGuy's facial features (eyes, nose, and pupils)
and to BeanGuy's vertical arm positioning (meant to look sort of like breathing). BeanGuy's "head" isn't really
suited for rotation about its X axis, so noise is only applied to the Y-axis rotation of the eyes and nose. The pupils,
however, rotate about both X and Y within the eyes. This use of noise to establish presence is based on Ken Perlin's
"The Clay Becomes Flesh" and "Responsive Face" experiments.
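The core of the presence effect can be sketched as a rest rotation plus a small, smoothly varying offset over time. A real version would sample Perlin noise; here a sum of sines stands in for a smooth 1D noise function, and the function names and amplitude are illustrative assumptions.

```java
// Sketch of noise-driven "presence": a joint's rest rotation gets a
// small, smooth time-varying offset so the model looks alive rather
// than frozen. smoothNoise() is a stand-in for real Perlin noise.
public class PresenceSketch {

    // Smooth pseudo-noise in roughly [-1, 1]; substitute for Perlin noise.
    static double smoothNoise(double t) {
        return 0.6 * Math.sin(1.7 * t) + 0.4 * Math.sin(2.3 * t + 1.0);
    }

    // Y-axis rotation for a facial feature: rest angle plus scaled noise.
    // The amplitude is kept small so the motion reads as subtle.
    static double featureRotationY(double restAngle, double timeSeconds) {
        double amplitude = 0.05;  // radians; illustrative value
        return restAngle + amplitude * smoothNoise(timeSeconds);
    }

    public static void main(String[] args) {
        // Sample the rotation over two seconds; it drifts gently
        // around the rest angle instead of jumping.
        for (double t = 0; t < 2.0; t += 0.5) {
            System.out.printf("t=%.1f rotY=%.4f%n", t, featureRotationY(0.0, t));
        }
    }
}
```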
Animation
By now, you've already seen BeanGuy going through some animations. Animation was accomplished using key frames and interpolation. To implement it,
I added a new AnimatedGeometry class which extends the Geometry class. AnimatedGeometry works like an API in that it hides all of the animation cycling
and interpolation details and requires only that the user provide the key frame geometries. For a better understanding, please take a look at the
source code links provided. BeanGuy.java's initialize code shows how easily AnimatedGeometry can be used as an API.
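The interpolation that AnimatedGeometry hides can be sketched as a linear blend between two key-frame vertex arrays. The class and method names here are illustrative, not the actual AnimatedGeometry API; only the key-frames-plus-interpolation idea comes from the description above.

```java
// Sketch of key-frame interpolation: given two key frames of vertex
// data and a phase in [0, 1], linearly blend them component-wise.
public class KeyFrameSketch {

    // Blend two key frames; phase 0 yields frameA, phase 1 yields frameB.
    static double[] interpolate(double[] frameA, double[] frameB, double phase) {
        double[] out = new double[frameA.length];
        for (int i = 0; i < frameA.length; i++) {
            out[i] = (1 - phase) * frameA[i] + phase * frameB[i];
        }
        return out;
    }

    public static void main(String[] args) {
        // Two hypothetical key frames for one vertex of a leg: down and up.
        double[] legDown = {0.0, 0.0, 0.0};
        double[] legUp   = {0.0, 1.0, 0.0};
        // Halfway through the cycle, the leg is half raised.
        double[] mid = interpolate(legDown, legUp, 0.5);
        System.out.println(mid[1]);  // 0.5
    }
}
```

An animation-cycling wrapper would then just advance the phase each frame and hand the blended vertices to the renderer, which is the bookkeeping AnimatedGeometry takes care of for the caller.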