
RADIANCE

Radiance is a rendering system developed over many years at the Lawrence Berkeley Laboratory. It incorporates most kinds of light transport in a physically-based simulation of architectural (and other) environments, using a hybrid Monte Carlo and deterministic ray tracing approach that has been optimized to provide accurate results quickly in most cases [14]. In general, Radiance produces high quality results in much less time than the MCPT method just described.

``Noise'' pixels are less common in Radiance, but they may still occur because of the Monte Carlo components of the calculation. However, because the calculation also contains deterministic components, an explicit measure of $S_p$ is not available for each input sample. Alternative rules for identifying ``noise'' pixels and their region of influence are needed.

In Radiance, an initial floating-point picture is generated at the super-sample resolution (typically 3x3 times the final image resolution). Anti-aliasing and other filtering operations are carried out by a separate program. Since we do not have an estimate of the variance of each sample, we define ``noisy'' samples as pixel super-sample values that are very large (or very small) compared to their neighbors. We increase the radius of influence for these samples. The criterion for how much to spread a sample is simply stated:

Any given super-sample is spread out sufficiently that its influence on any given output pixel is smaller than a specified tolerance.

A ``noisy'' super-sample is identified as having a greater influence on an output pixel than we can tolerate with the current filter kernel. The amount a given sample affects a weighted average of samples is derived very easily from the formula for weighted averages:

\[
\bar{x} = \frac{\sum_{i=1}^{k} w_i x_i}{\sum_{i=1}^{k} w_i}
\]

The average without sample $x_k$ is simply:

\[
\bar{x}_{k-1} = \frac{\sum_{i=1}^{k-1} w_i x_i}{\sum_{i=1}^{k-1} w_i}
\]

Taking the absolute difference between $\bar{x}$ and $\bar{x}_{k-1}$ and dividing by $\bar{x}_{k-1}$, we arrive at the absolute relative difference due to a super-sample's influence:

\[
D_{ar} = \frac{\left|\,\bar{x} - \bar{x}_{k-1}\,\right|}{\bar{x}_{k-1}}
\]

Our goal is to find a kernel width that produces a $D_{ar}$ less than or equal to the selected tolerance. In the context of a filter kernel whose weights sum to one, the average without the central super-sample is $(\bar{x} - w_o x_o)/(1 - w_o)$, and substituting this into the expression for $D_{ar}$ translates our goal into the following relation:

\[
\frac{w_o \left|\, x_o - \bar{x} \,\right|}{\bar{x} - w_o x_o} \;\leq\; \mathit{tolerance} \qquad (14)
\]

where:

$x_o$ is the super-sample value,

$w_o$ is the super-sample weight at the central peak of the filter kernel, and

$\bar{x}$ is the kernel-weighted average centered on this super-sample.
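To make the relation concrete, consider a purely illustrative example (the numbers are ours, not taken from any actual rendering): if a super-sample has value $x_o = 50$, central weight $w_o = 0.1$, and kernel-weighted average $\bar{x} = 6$, then $D_{ar} = 0.1 \cdot |50 - 6| / (6 - 0.1 \cdot 50) = 4.4$, well above the tolerances suggested below, so the kernel for that sample must be widened until its central weight is small enough to bring $D_{ar}$ under the limit.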

If the effect of a sample is above our tolerance level using the default kernel radius, the radius is expanded until the sample's effect is below tolerance. The tolerance given depends on the expected pixel variance, which is in turn related to the number of super-samples used for each pixel. From experience using a Gaussian kernel, a good tolerance value for 3x3 oversampling is 0.25, and a good tolerance for 4x4 oversampling is 0.15. These tolerance values are typically higher than the Ltvis value used for the previously described box filter because the influence of a Gaussian kernel always peaks near the closest output pixel, and drops off rapidly with distance.
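As a rough illustration of this expansion loop, the following C sketch recomputes the super-sample's influence from Relation 14 and widens the kernel until that influence falls below the tolerance. It is our own sketch, not the Radiance filter code: the routine names, the hypothetical kernel_average() helper (assumed to renormalize the Gaussian weights to sum to one over the samples inside the radius), and the fixed 25% growth step are all assumptions made for the example.

\begin{verbatim}
/* Illustrative sketch only -- not the actual Radiance filter code. */
#include <math.h>

/* Hypothetical helper: recompute the central weight w_o and the
 * Gaussian-weighted average xbar of the super-samples within the given
 * radius, centered on super-sample (sx, sy).  Weights are assumed to be
 * renormalized to sum to one over the samples inside the radius. */
extern void kernel_average(int sx, int sy, double radius,
                           double *w_o, double *xbar);

/* Relative influence of the central super-sample on the weighted
 * average, assuming the kernel weights sum to one (Relation 14):
 *     D_ar = w_o*|x_o - xbar| / (xbar - w_o*x_o)                    */
double
sample_influence(double x_o, double w_o, double xbar)
{
    return w_o * fabs(x_o - xbar) / (xbar - w_o * x_o);
}

/* Widen the kernel until the super-sample's influence drops below the
 * tolerance tol, or until a maximum radius rmax is reached. */
double
spread_radius(int sx, int sy, double x_o,
              double r0, double tol, double rmax)
{
    double r = r0, w_o, xbar;

    kernel_average(sx, sy, r, &w_o, &xbar);
    while (sample_influence(x_o, w_o, xbar) > tol && r < rmax) {
        r *= 1.25;                   /* grow the radius and retest */
        kernel_average(sx, sy, r, &w_o, &xbar);
    }
    return r;
}
\end{verbatim}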

The search for a kernel width that satisfies Relation 14 without overshooting it could be expensive, so we have made a few optimizations. First, we work with pixel luminance values rather than colors, which reduces the number of operations and avoids the color shifting mentioned in the previous section. Second, we compute ring sums about the closest output pixel and use 1-dimensional vector multiplication to compute the different Gaussian-weighted averages, greatly simplifying the calculation of $\bar{x}$. Finally, we use numerical iteration to zero in on the appropriate kernel width more quickly. Treating Relation 14 as an equation, we guess the next kernel width based on the $D_{ar}$ computed for the current width. In most cases, this produces faster convergence than a simple binary search, but care must be taken to avoid infinite iteration on anomalous pixels. In our implementation of this filter, we have found it to take about three times as long as a standard Gaussian kernel, which is still insignificant compared with the overall rendering times.
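One plausible form of the width iteration is sketched below, again as an assumption rather than a description of the actual implementation. It reuses the hypothetical routines from the previous sketch, treats Relation 14 as an equation, and scales the radius by $\sqrt{D_{ar}/\mathit{tolerance}}$ at each step, on the assumption that $D_{ar}$ varies roughly with the central weight $w_o$, which for a normalized two-dimensional Gaussian falls off as the inverse square of the radius; a fixed iteration cap stands in for the safeguard against infinite iteration on anomalous pixels.

\begin{verbatim}
/* Sketch of the iterative width search -- an assumption, not the
 * Radiance implementation.  Reuses kernel_average() and
 * sample_influence() from the previous sketch. */
#include <math.h>

#define MAX_ITER 8          /* guard against runaway iteration */

extern void kernel_average(int sx, int sy, double radius,
                           double *w_o, double *xbar);
extern double sample_influence(double x_o, double w_o, double xbar);

double
find_radius(int sx, int sy, double x_o,
            double r0, double tol, double rmax)
{
    double r = r0, w_o, xbar, d_ar;
    int i;

    for (i = 0; i < MAX_ITER; i++) {
        kernel_average(sx, sy, r, &w_o, &xbar);
        d_ar = sample_influence(x_o, w_o, xbar);
        if (d_ar <= tol)
            break;                   /* influence now within tolerance */
        r *= sqrt(d_ar / tol);       /* jump toward the width meeting tol */
        if (r >= rmax) {
            r = rmax;                /* clamp anomalous pixels */
            break;
        }
    }
    return r;
}
\end{verbatim}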

Examples of applying a filter of this form are shown in Figs. 5 and 6. The examples demonstrate two types of high-variance areas that can be encountered in Radiance renderings.

- Figure 5

As discussed in Section 4.1, light source boundaries (diagrammed in Fig. 1(a)) may cause aliasing in the final image even when many samples are taken at a pixel, because just one sample landing on or off the light source makes a detectable contrast difference in the final result. As mentioned in Section 4.2, the usual solution of clamping before filtering produces incorrect results and destroys the physical units of the result. By applying an energy preserving non-linear filter before mapping to the display device, extreme contrast boundaries are spread out and aliasing is reduced. The effect is a slight fuzziness to light source boundaries in proportion to their brightness, which appears quite natural because the eye loses acuity in these regions anyway.

The image on the left of Fig. 5 shows a low-resolution closeup of a pendant fixture, filtered with a linear Gaussian kernel and 9 samples/pixel. Notice the jagged edges caused by inadequate sampling. The image on the right shows the same computation with an energy preserving non-linear filter applied during anti-aliasing. The source boundaries are now softer and smoother, as they would appear in real life. The results have not been changed, only dispersed slightly around the source edges. This is important for later analyses, which might need the absolute radiance values to evaluate glare or other visual quality metrics.

- Figure 6

To avoid unpredictably long rendering times, Radiance uses a minimal number of shadow rays to light sources plus one specular ray per pixel super-sample per surface interaction, similar to Monte Carlo path tracing. The user chooses an initial sampling density that produces adequate convergence over most of the image, but in areas where this number of samples is insufficient, there will be noise. The most frequent source of objectionable sampling noise is rough specular reflection of light sources (diagrammed in Fig. 1(c)). The left image in Fig. 6 shows a rendering of a candle holder with a rough specular surface on a table, using 16 samples per pixel and a linear Gaussian filter. In this case, a linear Gaussian filter compounds the sampling artifacts by spreading them out to neighboring pixels without sufficiently reducing their contribution. So, little bright spots become big bright spots. The right image demonstrates how a non-linear filter reduces image noise without compromising the results. The specular highlights that were present in the calculation are still present in the filtered image, only more evenly distributed. This can be compared with alpha-trimmed filters, which reduce noise simply by removing the offending samples, taking the very real highlight with them. The energy preserving quality of our filter ensures that we do not lose the information we have worked so hard to compute.




