Designing Effective Transfer Functions for Volume Rendering from Photographic Volumes
Photographic volumes present a unique and interesting challenge for volume rendering. In photographic volumes, voxel color is predetermined, making color selection through transfer functions unnecessary. However, photographic data contains no clear mapping from the multivalued color values to a scalar density or opacity, making projection and compositing much more difficult than with traditional volumes. Moreover, because of the nonlinear nature of color spaces, there is no meaningful norm for the multivalued voxels; thus, the individual color channels of photographic data must be treated as incomparable data tuples rather than as vector values. Traditional differential-geometric tools, such as intensity gradients, density, and Laplacians, are distorted by the nonlinear, nonorthonormal color spaces that form the domain of the voxel values. We have developed several techniques for managing these issues while directly rendering volumes from photographic data. We present and justify the normalization of color values by mapping RGB values to the CIE L*u*v* color space. We explore and compare opacity transfer functions that map three-channel color values to opacity, applying these many-to-one mappings both to the original RGB values and to the voxels after conversion to L*u*v* space. Direct rendering using transfer functions allows us to explore photographic volumes without committing to an a priori segmentation that might mask fine variations of interest. We empirically compare the combined effects of the two color spaces and our opacity transfer functions using source data from the Visible Human Project.
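The two ingredients named above can be sketched in code: a standard sRGB→CIE L*u*v* conversion (the normalization step), followed by one illustrative many-to-one opacity transfer function. This is a minimal sketch, not the paper's actual pipeline; in particular, the distance-to-background opacity rule and its `background` and `scale` parameters are hypothetical choices for illustration.

```python
import math

# sRGB -> linear RGB (IEC 61966-2-1 transfer function)
def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

# sRGB -> XYZ matrix (D65). Its row sums give the white point, so a
# pure-white voxel maps exactly to (L* = 100, u* = 0, v* = 0).
_M = [(0.4124, 0.3576, 0.1805),
      (0.2126, 0.7152, 0.0722),
      (0.0193, 0.1192, 0.9505)]
_WHITE = tuple(sum(row) for row in _M)

def _uv_prime(X, Y, Z):
    d = X + 15.0 * Y + 3.0 * Z
    if d == 0.0:                       # black voxel: chromaticity undefined
        return _uv_prime(*_WHITE)
    return 4.0 * X / d, 9.0 * Y / d

def rgb_to_luv(r, g, b):
    """Map an RGB voxel to CIE L*u*v*, a perceptually more uniform space."""
    rl, gl, bl = (srgb_to_linear(c) for c in (r, g, b))
    X, Y, Z = (m0 * rl + m1 * gl + m2 * bl for m0, m1, m2 in _M)
    yr = Y / _WHITE[1]
    if yr > (6.0 / 29.0) ** 3:
        L = 116.0 * yr ** (1.0 / 3.0) - 16.0
    else:
        L = (29.0 / 3.0) ** 3 * yr
    up, vp = _uv_prime(X, Y, Z)
    un, vn = _uv_prime(*_WHITE)
    return L, 13.0 * L * (up - un), 13.0 * L * (vp - vn)

def opacity_from_luv(L, u, v, background=(100.0, 0.0, 0.0), scale=100.0):
    """Illustrative many-to-one opacity transfer function: voxels far (in
    Euclidean L*u*v* distance) from a background color become more opaque.
    `background` and `scale` are hypothetical parameters, not from the paper."""
    return min(1.0, math.dist((L, u, v), background) / scale)
```

Because L*u*v* is approximately perceptually uniform, a Euclidean distance like the one above is far more defensible there than in RGB, which is one motivation for applying the same many-to-one mapping in both spaces and comparing the results.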