Volume Rendering Pipeline

Reconstructing the original signal

Interpolation, Classification, Transfer Functions, Shading, Gradient Estimation, Central Differences Operator, Compositing

At a Glance


During data acquisition, a continuous signal is sampled at regular intervals and stored as a discrete voxel grid; a volume dataset therefore represents the original signal in discretized form. During rendering, however, an application will very rarely access the volume data exactly at grid points. Most of the time, samples are taken at positions in between the voxels, and the respective sample value has to be interpolated.

When sampling the dataset at a certain position, the original signal should be reconstructed as accurately as possible. Samples taken in between voxels are interpolated, and this interpolation serves as the reconstruction of the original signal.

The quality of interpolation is of great importance for the resulting image. There are many interpolation techniques relevant for volume rendering, such as trilinear interpolation or cubic B-spline interpolation.
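As a minimal sketch of the reconstruction step, the following shows trilinear interpolation of a voxel grid at a non-integer position (the function name is illustrative, and border handling is omitted for brevity):

```python
import numpy as np

def trilinear_interpolate(volume, x, y, z):
    """Reconstruct a sample at the non-integer position (x, y, z)
    by blending the eight surrounding voxels."""
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    x1, y1, z1 = x0 + 1, y0 + 1, z0 + 1
    fx, fy, fz = x - x0, y - y0, z - z0

    # Linearly blend along x, then y, then z.
    c00 = volume[x0, y0, z0] * (1 - fx) + volume[x1, y0, z0] * fx
    c10 = volume[x0, y1, z0] * (1 - fx) + volume[x1, y1, z0] * fx
    c01 = volume[x0, y0, z1] * (1 - fx) + volume[x1, y0, z1] * fx
    c11 = volume[x0, y1, z1] * (1 - fx) + volume[x1, y1, z1] * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz
```

In practice this operation is performed by the GPU's texture hardware; the code only illustrates what happens conceptually when a sample falls between grid points.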



Classification

A single voxel within a regular volume grid stores only an intensity value and contains no information about the appearance of an element, such as color or transparency. This is because, during data acquisition, usually only the signal intensity is recorded, without further information on the appearance of a voxel.

The optical material properties are therefore not defined and must be assigned manually. This process of mapping intensity values to visual rendering parameters such as opacity and color is known as voxel classification. Structures inside a dataset can thereby be highlighted or completely hidden. For example, voxels with a high intensity may belong to bone and can be rendered in white, whereas softer tissue such as fat or skin, which has a much lower intensity, can be made transparent. In this way, areas of interest can be selected based on voxel intensity.

Transfer Functions

Such a mapping of intensity values to transparency is an example of a one-dimensional classification: it is based only on scalar values, since only intensity ranges are taken into consideration. The mapping that describes how certain intensity values are rendered is known as a transfer function.

Transfer functions are generally implemented using some form of lookup table that contains the optical attributes for all available intensity values. A basic opacity mapping can be encoded in a simple one-dimensional texture; however, transfer functions can also be two- or, in theory, n-dimensional.
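A one-dimensional transfer function as a lookup table might be sketched as follows (the function names, intensity thresholds, and colors are purely illustrative; on the GPU the table would live in a 1D texture):

```python
import numpy as np

def build_transfer_function():
    """Hypothetical 256-entry RGBA lookup table indexed by 8-bit intensity."""
    lut = np.zeros((256, 4))             # rows: intensity, columns: R, G, B, A
    lut[0:80] = [0.0, 0.0, 0.0, 0.0]     # low intensities (e.g. air): fully transparent
    lut[80:160] = [1.0, 0.8, 0.7, 0.1]   # mid intensities (e.g. soft tissue): faint, mostly transparent
    lut[160:256] = [1.0, 1.0, 1.0, 0.9]  # high intensities (e.g. bone): nearly opaque white
    return lut

def classify(intensity, lut):
    """Map a voxel intensity (0-255) to its assigned optical properties."""
    return lut[int(intensity)]
```

Changing the table entries changes which structures are highlighted or hidden without touching the volume data itself, which is what makes transfer-function editing interactive.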

Even though one-dimensional transfer functions are the most widely used, they have certain severe limitations. To overcome these, additional dimensions can be integrated into the transfer function domain, leading to more flexibility during classification.

The occlusion spectrum, a cross plot of signal intensity against the respective occlusion, serves as a two-dimensional transfer function: in addition to the intensity, the occlusion values act as a second classification dimension. By selecting appropriate regions inside the spectrum, structures can be separated, an operation that might not be possible using simple one-dimensional transfer functions.


Shading

Shading of a voxel is performed after it has been assigned color and opacity values. The final color of a voxel, however, further depends on parameters such as the view direction and the position and color of the available light sources. Certain models describe how these parameters influence the appearance of a point.

These models are known as local illumination models, since they describe the illumination of single surface points without taking global effects such as the ambient surroundings into consideration. Local illumination models calculate the color for a particular surface point based on the normal vector that describes its spatial orientation. In order to adapt local illumination techniques to volumetric datasets, however, another metric is needed, as such datasets do not contain normals for their data elements.

Gradient Estimation

A good approximation of the surface normal at a given point can be obtained from the gradient of the volume dataset, which is defined as the vector perpendicular to the isosurface through that point. There are different approaches to calculating the gradient, including preprocessing and runtime techniques.

Precomputed gradients generally result in high memory consumption, as the gradients must be calculated and stored per voxel. An additional volume is uploaded to the GPU, and during rendering the gradients are sampled from this separate texture. Since GPU memory is usually a scarce resource, this can be an unacceptable limitation.

Another approach is to generate gradients during runtime. This so-called on-the-fly gradient estimation can be implemented using one of many available gradient operators. The most common operator is the central differences operator.

The central differences operator approximates the directional derivatives by taking six additional samples from neighboring positions. Along each axis, the secant from the previous to the next sample is used as an approximation of the respective partial derivative.
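A minimal sketch of the central differences operator (the function name is illustrative; unit voxel spacing is assumed, and the sample position is taken to lie away from the volume border):

```python
import numpy as np

def central_difference_gradient(volume, x, y, z):
    """Approximate the gradient at voxel (x, y, z) from its six axis neighbors.
    Assumes unit voxel spacing and that (x, y, z) is not on the border."""
    gx = (volume[x + 1, y, z] - volume[x - 1, y, z]) / 2.0
    gy = (volume[x, y + 1, z] - volume[x, y - 1, z]) / 2.0
    gz = (volume[x, y, z + 1] - volume[x, y, z - 1]) / 2.0
    return np.array([gx, gy, gz])
```

In a fragment shader the same six fetches would be texture lookups around the current sample position rather than array accesses.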


One important difference between precomputed and runtime methods is the way gradients are stored and sampled. Precomputed gradients are stored at regular grid coordinates and interpolated trilinearly during rendering, whereas runtime gradients are calculated on a per-pixel basis inside the fragment shader.

Since modern hardware has reached a performance level that is sufficient for runtime evaluation of gradients, this approach is generally preferred nowadays as it also results in higher image quality.

Gradient-based shading works best for volume datasets that have distinct, clear boundaries between material layers. Based on the gradient, all common local illumination techniques can be applied. Gradient-based techniques are, however, very sensitive to high-frequency noise.
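As a minimal sketch of how the gradient stands in for a surface normal, the following applies a Lambertian diffuse term, one of the common local illumination techniques (the function name and scalar base color are illustrative):

```python
import numpy as np

def diffuse_shade(base_color, gradient, light_dir):
    """Lambertian diffuse shading using the negated, normalized gradient
    as the surface normal."""
    norm = np.linalg.norm(gradient)
    if norm < 1e-8:
        return base_color  # homogeneous region: no meaningful normal, skip shading
    normal = -gradient / norm
    l = light_dir / np.linalg.norm(light_dir)
    # Clamp the cosine term so surfaces facing away from the light go dark.
    return base_color * max(float(np.dot(normal, l)), 0.0)
```

The near-zero-gradient guard also hints at why noisy data is problematic: in noisy but homogeneous regions the gradient direction is essentially random, so the shading flickers.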


Compositing

The basis for evaluating the discretized volume rendering integral is the compositing scheme, which describes how the samples taken along a ray are blended together to build the final pixel color. Although there are several different schemes for blending samples, they can all be classified as either front-to-back or back-to-front traversal schemes, depending on the direction in which the viewing rays are processed.

The most common technique is front-to-back blending: rays are cast from the viewpoint into the volume, and samples are taken at discrete positions along each ray. Both traversal orders come with benefits and drawbacks, so the order should be selected depending on the application.

One big advantage of front-to-back traversal is early ray termination: the sampling of rays that have accumulated full opacity can be canceled, because all portions of the volume located behind them are completely occluded. In addition to these two direction-based compositing schemes, there are other methods such as maximum intensity projection (MIP) or accumulation methods (first, last, average, etc.).
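Front-to-back compositing with early ray termination can be sketched as follows (the function name and the opacity threshold are illustrative; a single scalar stands in for an RGB color to keep the sketch short):

```python
def composite_front_to_back(samples, opacity_threshold=0.99):
    """Blend (color, alpha) samples along a ray, nearest sample first."""
    accum_color = 0.0
    accum_alpha = 0.0
    for color, alpha in samples:
        # Each new sample contributes only through the remaining transparency.
        accum_color += (1.0 - accum_alpha) * alpha * color
        accum_alpha += (1.0 - accum_alpha) * alpha
        # Early ray termination: stop once the pixel is effectively opaque,
        # since everything behind this point is occluded anyway.
        if accum_alpha >= opacity_threshold:
            break
    return accum_color, accum_alpha
```

Back-to-front traversal would instead iterate from the farthest sample toward the viewer using the over operator, which is simpler per step but cannot terminate early.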