A perceptual capability finely honed in humans is the ability to perceive 3D shape from 2D shading information such as one might see in a photograph or painting. Traditionally, surfaces projected from 3D to 2D have been illuminated by 3D lighting models to generate the rendered shading. We can generalize this procedure and systematically compute the properties of shaded images of illuminated 4D objects in an attempt to recover some of this intuitive perception. The image of a 2D world is a projection to 1D film, a 3D world projects to 2D film, and a 4D world projects to 3D film, a volume filled with points of light. Volumes differentially reflect 4D light to give changing shades in the projected 3D volume image, just as the faces of a 3D polyhedron reflect 3D light to give different shades in the 2D image plane.
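The dimensional ladder above can be made concrete with a minimal sketch (the function names and the choice of a Lambertian model with a pinhole eye on the w axis are our own illustrative assumptions, not the paper's implementation): a 4D "face" with a 4D normal reflects a 4D light direction, and 4D points project perspectively onto a 3D volume film.

```python
import math

def normalize(v):
    """Return v scaled to unit length (any dimension)."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def lambert(normal, light):
    """Diffuse shade from the dot product of unit normal and light
    directions; back-facing cells receive no direct light."""
    d = sum(a * b for a, b in zip(normalize(normal), normalize(light)))
    return max(0.0, d)

def project_4d_to_3d(p, eye_w=4.0):
    """Perspective projection along the w axis: a 4D point lands on
    the w = 0 volume 'film', scaled by its distance from the eye."""
    x, y, z, w = p
    s = eye_w / (eye_w - w)
    return [x * s, y * s, z * s]
```

For example, a cell whose 4D normal points straight at the light receives full intensity, while one facing away receives none, exactly as in the familiar 3D case one dimension down.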
A common technique for viewing 1D curves in 3D graphics is ``tubing,'' which thickens each point on a curve by adding a disk; the boundary or outer skin of this solid fiber is a finite cylindrical surface that can be rendered by standard methods, so that 3D shading and occlusion cues are computed directly. In [5], Hanson and Heng propose an analogous technique for 4D shading: thicken a surface embedded in 4D by adding a shiny circle at each point, illuminate it with 4D light, depth-buffer the projection to a 3D volume image, and volume render the result. The 4D depth buffer in principle produces precisely the same characteristic occlusion cues that 3D rendering produces for tubes. However, the full method, with its volume imaging, 4D occlusion calculations, and final volume-rendering step, is very time consuming. In [4], Hanson and Cross introduce new techniques that are fast enough for interactive 4D visualization in virtual reality environments. In Figure 13, we compare the low-resolution, time-consuming, full volume image of the thickened surface with the fast approximation, which computes a texture map for an ideal, infinitesimally thickened surface illuminated by 4D light and then projected from 4D to 3D.
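The 3D tubing construction that the 4D method generalizes can be sketched as follows; this is a minimal illustration under our own assumptions (a polyline curve and a simple perpendicular frame at each sample), not the implementation of [5]: each curve point is replaced by a ring of vertices lying in the plane perpendicular to the local tangent, and consecutive rings form the cylindrical skin.

```python
import math

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def tube(points, radius=0.1, sides=8):
    """Thicken a 3D polyline: at each sample, place `sides` vertices on a
    circle of the given radius in the plane perpendicular to the tangent."""
    rings = []
    for i, p in enumerate(points):
        # central-difference tangent (one-sided at the endpoints)
        a = points[max(i - 1, 0)]
        b = points[min(i + 1, len(points) - 1)]
        t = normalize([b[j] - a[j] for j in range(3)])
        # seed the frame with any vector not parallel to the tangent
        up = [0.0, 0.0, 1.0] if abs(t[2]) < 0.9 else [1.0, 0.0, 0.0]
        n = normalize(cross(t, up))
        bn = cross(t, n)
        ring = []
        for k in range(sides):
            theta = 2.0 * math.pi * k / sides
            c, s = math.cos(theta), math.sin(theta)
            ring.append([p[j] + radius * (c * n[j] + s * bn[j])
                         for j in range(3)])
        rings.append(ring)
    return rings
```

The 4D analogue replaces the curve with a surface and the disk with a circle in the 2D subspace normal to the surface at each point, after which the thickened object can be lit and depth-buffered one dimension up.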