3DMark Solar Bay uses a cross-platform graphics engine based on the engine used in 3DMark Speed Way, optimized for lightweight PCs and high-end Android devices. The engine was developed in-house with input from members of the UL Benchmark Development Program.

The rendering work, including scene update, visibility evaluation, and command recording, is distributed across multiple CPU threads, one per available logical CPU core. This spreads the CPU load over all cores and keeps a single thread from becoming a bottleneck.
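
As a rough illustration of this kind of work distribution, here is a minimal C++ sketch using one worker thread per logical core. The RecordedCommand type and the per-object work are illustrative assumptions, since the engine's job system is not public.

    #include <algorithm>
    #include <cstdio>
    #include <thread>
    #include <vector>

    // Illustrative stand-in for a recorded draw command.
    struct RecordedCommand { int objectIndex; };

    int main() {
        const int objectCount = 10000;
        const unsigned threadCount = std::max(1u, std::thread::hardware_concurrency());

        // One command list per thread, so recording needs no locking.
        std::vector<std::vector<RecordedCommand>> perThread(threadCount);
        std::vector<std::thread> workers;

        for (unsigned t = 0; t < threadCount; ++t) {
            workers.emplace_back([&, t] {
                // Each thread records an interleaved slice of the scene's objects.
                for (int i = static_cast<int>(t); i < objectCount;
                     i += static_cast<int>(threadCount)) {
                    // A real engine would update the object, test visibility,
                    // and record the draw call here.
                    perThread[t].push_back(RecordedCommand{i});
                }
            });
        }
        for (auto& w : workers) w.join();

        // The per-thread command lists would then be submitted to the GPU queue.
        size_t total = 0;
        for (auto& list : perThread) total += list.size();
        std::printf("%u threads recorded %zu commands\n", threadCount, total);
        return 0;
    }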

What is ray tracing?

Ray tracing is the showcase technology for Solar Bay, and the test uses the technique to simulate real-time reflections. Compared with traditional rasterization, ray tracing produces far more realistic lighting. While it is not a new technology, it is only in recent years that consumer GPUs and Android devices have become capable of running real-time ray-traced games at frame rates acceptable to gamers.


Graphics Features

Ray tracing

Ray tracing is used to render reflections on the reflective panels and, at the beginning of the benchmark, on the floor. As the benchmark progresses and more panels are added, the ray tracing workload increases over three sections, ending at a workload level similar to that of 3DMark Port Royal.

For ray tracing, Solar Bay uses the ray query mode on devices running Android and the ray tracing pipeline mode on devices running Windows. While these are different modes, each represents the best-performing implementation of ray tracing on its platform, so Solar Bay scores remain comparable across Android and Windows devices.
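
As an illustration of how such a choice might be made, the sketch below selects a mode from a device's supported Vulkan extension names. The extension strings VK_KHR_ray_query and VK_KHR_ray_tracing_pipeline are the real Vulkan identifiers; the selection logic itself is an assumption for illustration, not Solar Bay's actual code.

    #include <algorithm>
    #include <cstdio>
    #include <string>
    #include <vector>

    enum class RayTracingMode { None, RayQuery, Pipeline };

    // Pick a ray tracing mode from a list of supported device extension names,
    // e.g. as reported by vkEnumerateDeviceExtensionProperties.
    RayTracingMode chooseMode(const std::vector<std::string>& extensions,
                              bool preferPipeline) {
        auto has = [&](const char* name) {
            return std::find(extensions.begin(), extensions.end(), name) !=
                   extensions.end();
        };
        bool query = has("VK_KHR_ray_query");
        bool pipeline = has("VK_KHR_ray_tracing_pipeline");
        if (preferPipeline && pipeline) return RayTracingMode::Pipeline;
        if (query) return RayTracingMode::RayQuery;
        if (pipeline) return RayTracingMode::Pipeline;
        return RayTracingMode::None;
    }

    int main() {
        // An Android-class device exposing ray queries only.
        std::vector<std::string> android = {"VK_KHR_acceleration_structure",
                                            "VK_KHR_ray_query"};
        std::printf("android mode: %d\n", static_cast<int>(chooseMode(android, false)));

        // A Windows-class device exposing both; prefer the pipeline mode.
        std::vector<std::string> windows = {"VK_KHR_acceleration_structure",
                                            "VK_KHR_ray_query",
                                            "VK_KHR_ray_tracing_pipeline"};
        std::printf("windows mode: %d\n", static_cast<int>(chooseMode(windows, true)));
        return 0;
    }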

Particles

Particles are simulated on the GPU using compute shaders. The particles are self-illuminated and are rendered together with the transparent geometry using the same order-independent transparency technique.
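
A minimal CPU-side sketch of the kind of per-particle step such a compute shader performs, one invocation per particle; the forces, lifetime handling, and respawn rule are illustrative assumptions.

    #include <cstdio>
    #include <vector>

    struct Vec3 { float x, y, z; };

    struct Particle {
        Vec3 position;
        Vec3 velocity;
        float age;       // seconds since spawn
        float lifetime;  // seconds until respawn
    };

    // On the GPU, this body would run once per compute-shader invocation.
    void updateParticle(Particle& p, float dt) {
        const Vec3 gravity{0.0f, -9.8f, 0.0f};
        p.velocity.y += gravity.y * dt;
        p.position.x += p.velocity.x * dt;
        p.position.y += p.velocity.y * dt;
        p.position.z += p.velocity.z * dt;
        p.age += dt;
        if (p.age >= p.lifetime) {
            // Expired particles are recycled at the emitter.
            p = Particle{{0, 0, 0}, {0, 5.0f, 0}, 0.0f, p.lifetime};
        }
    }

    int main() {
        std::vector<Particle> particles(
            1024, Particle{{0, 0, 0}, {0, 5.0f, 0}, 0.0f, 2.0f});
        for (int frame = 0; frame < 120; ++frame)
            for (auto& p : particles) updateParticle(p, 1.0f / 60.0f);
        std::printf("first particle y = %f\n", particles[0].position.y);
        return 0;
    }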

Geometry rendering

Opaque objects are rendered using a deferred method: the graphics pipeline first writes PBR material parameters to temporary G-buffer targets, and shading is then evaluated in linear HDR color space from that G-buffer data using clustered light information. In addition to the final lighting result, the deferred rendering pass outputs depth information for subsequent rendering effects.
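
To make the clustered shading step concrete, here is a much-simplified sketch: the view is divided into clusters, each cluster holds the indices of the lights that can affect it, and the shading pass looks up only those lights for each G-buffer pixel. The cluster layout, the light falloff, and all names are illustrative assumptions; a real implementation also partitions along view depth and uses full PBR shading.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct PointLight { Vec3 position; Vec3 color; float radius; };
    struct GBufferSample { Vec3 worldPos; Vec3 normal; Vec3 albedo; };

    // Clusters partition screen space (and, in a real engine, view depth too).
    struct ClusterGrid {
        int clustersX, clustersY;
        std::vector<std::vector<int>> lightIndices;  // per-cluster light lists
    };

    static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    Vec3 shadePixel(const GBufferSample& g, int px, int py, int width, int height,
                    const ClusterGrid& grid, const std::vector<PointLight>& lights) {
        // Find the cluster this pixel falls into, then shade only its lights.
        int cx = px * grid.clustersX / width;
        int cy = py * grid.clustersY / height;
        const auto& indices = grid.lightIndices[cy * grid.clustersX + cx];
        Vec3 result{0, 0, 0};
        for (int li : indices) {
            const PointLight& l = lights[li];
            Vec3 toLight{l.position.x - g.worldPos.x, l.position.y - g.worldPos.y,
                         l.position.z - g.worldPos.z};
            float dist = std::sqrt(dot3(toLight, toLight));
            if (dist > l.radius) continue;
            Vec3 dir{toLight.x / dist, toLight.y / dist, toLight.z / dist};
            float ndotl = std::fmax(0.0f, dot3(g.normal, dir));
            float atten = 1.0f - dist / l.radius;  // simple falloff
            result.x += g.albedo.x * l.color.x * ndotl * atten;
            result.y += g.albedo.y * l.color.y * ndotl * atten;
            result.z += g.albedo.z * l.color.z * ndotl * atten;
        }
        return result;  // linear HDR; tone mapping happens later
    }

    int main() {
        ClusterGrid grid{2, 2, std::vector<std::vector<int>>(4)};
        std::vector<PointLight> lights = {{{0, 2, 0}, {1, 1, 1}, 10.0f}};
        grid.lightIndices[0] = {0};  // light 0 affects cluster (0,0) only
        GBufferSample g{{0, 0, 0}, {0, 1, 0}, {0.8f, 0.2f, 0.2f}};
        Vec3 c = shadePixel(g, 10, 10, 640, 360, grid, lights);
        std::printf("shaded color: %f %f %f\n", c.x, c.y, c.z);
        return 0;
    }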

Non-ray-traced reflections (on non-mirror-like surfaces) are based on localized environment cube maps that are blended together during sampling, weighted by their effective volumes.
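
A hedged sketch of the blending idea: each probe has an influence volume, a per-probe weight falls off toward the edge of that volume, and the sampled cube map colors are combined with normalized weights. The spherical volumes and linear falloff are illustrative assumptions.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // A localized environment probe with a spherical influence volume.
    struct Probe {
        Vec3 center;
        float innerRadius;  // full weight inside this radius
        float outerRadius;  // zero weight beyond this radius
        Vec3 sampleColor;   // stand-in for sampling the probe's cube map
    };

    static float length3(Vec3 v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

    // Weight falls linearly from 1 at innerRadius to 0 at outerRadius.
    float probeWeight(const Probe& p, Vec3 pos) {
        Vec3 d{pos.x - p.center.x, pos.y - p.center.y, pos.z - p.center.z};
        float dist = length3(d);
        if (dist <= p.innerRadius) return 1.0f;
        if (dist >= p.outerRadius) return 0.0f;
        return (p.outerRadius - dist) / (p.outerRadius - p.innerRadius);
    }

    Vec3 blendProbes(const std::vector<Probe>& probes, Vec3 shadedPos) {
        Vec3 sum{0, 0, 0};
        float weightSum = 0.0f;
        for (const Probe& p : probes) {
            float w = probeWeight(p, shadedPos);
            sum.x += p.sampleColor.x * w;
            sum.y += p.sampleColor.y * w;
            sum.z += p.sampleColor.z * w;
            weightSum += w;
        }
        if (weightSum <= 0.0f) return Vec3{0, 0, 0};  // no probe covers this point
        return Vec3{sum.x / weightSum, sum.y / weightSum, sum.z / weightSum};
    }

    int main() {
        std::vector<Probe> probes = {
            {{0, 0, 0}, 1.0f, 5.0f, {1.0f, 0.0f, 0.0f}},
            {{4, 0, 0}, 1.0f, 5.0f, {0.0f, 0.0f, 1.0f}},
        };
        Vec3 c = blendProbes(probes, Vec3{2, 0, 0});
        std::printf("blended reflection: %f %f %f\n", c.x, c.y, c.z);
        return 0;
    }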

Transparent objects are rendered using the “Weighted Blended Order-Independent Transparency” technique by McGuire and Bavoil. The technique requires only two temporary render targets to achieve a good approximation of real transparency in the scene. The result of the transparent objects pass is blended on top of the final surface illumination.
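
The core of the technique can be sketched with simple per-pixel math: each transparent fragment adds its depth-weighted, premultiplied color to an accumulation target while a second target tracks how much of the background remains visible, and a final pass composites the two. The weight function below is one of those suggested in the McGuire and Bavoil paper; the surrounding C++ is an illustrative CPU approximation, not the benchmark's shader code.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct Vec4 { float r, g, b, a; };

    // One of the depth-based weight functions proposed by McGuire and Bavoil;
    // z is positive view-space depth, alpha is fragment coverage.
    float weight(float z, float alpha) {
        float w = 10.0f / (1e-5f + std::pow(z / 5.0f, 2.0f) +
                           std::pow(z / 200.0f, 6.0f));
        return alpha * std::clamp(w, 1e-2f, 3e3f);
    }

    int main() {
        // Per-pixel "render targets": accumulation (RGBA) and revealage (R).
        Vec4 accum{0, 0, 0, 0};
        float revealage = 1.0f;

        // Transparent fragments arriving in arbitrary order (color, depth).
        struct Fragment { Vec4 color; float z; };
        Fragment fragments[] = {
            {{1.0f, 0.0f, 0.0f, 0.5f}, 10.0f},
            {{0.0f, 1.0f, 0.0f, 0.3f}, 4.0f},
        };

        for (const Fragment& f : fragments) {
            float w = weight(f.z, f.color.a);
            // Accumulate premultiplied color and alpha, each scaled by the weight.
            accum.r += f.color.r * f.color.a * w;
            accum.g += f.color.g * f.color.a * w;
            accum.b += f.color.b * f.color.a * w;
            accum.a += f.color.a * w;
            revealage *= 1.0f - f.color.a;  // how much background survives
        }

        // Composite over an opaque background color.
        Vec4 background{0.2f, 0.2f, 0.2f, 1.0f};
        float denom = std::max(accum.a, 1e-5f);
        float outR = (accum.r / denom) * (1.0f - revealage) + background.r * revealage;
        float outG = (accum.g / denom) * (1.0f - revealage) + background.g * revealage;
        float outB = (accum.b / denom) * (1.0f - revealage) + background.b * revealage;
        std::printf("composited: %f %f %f\n", outR, outG, outB);
        return 0;
    }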

All rendering is done using primary command buffers. Geometry shaders and tessellation are not used in this benchmark.

Post Processing

TAA

Motion vectors are computed in the fragment shader during G-buffer rendering. Two history textures store depth and illumination data from previous frames, and an exponential moving average with variance clipping blends in the data of the current frame. The depth texture is linearized in a separate pass so that the blending works correctly. Motion vectors from the current frame are used as offsets when sampling the history textures in the resolve pass. The resolve is applied to the final illumination texture and the linearized depth as the first post-processing pass, before bloom and depth of field.
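
As a rough illustration of the resolve for a single pixel, here is a CPU sketch of variance clipping followed by the exponential moving average, assuming the history sample has already been fetched at the motion-vector offset. The 3x3 neighborhood statistics, the clipping bound gamma, and the blend factor are common choices for this technique, not values published for Solar Bay.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    // Blend a reprojected history color with the current frame's color, after
    // clipping the history to the mean +/- gamma*stddev box of the current
    // pixel's 3x3 neighborhood (variance clipping).
    Vec3 temporalBlend(const Vec3 nb[9], Vec3 history, float gamma, float blend) {
        Vec3 mean{0, 0, 0}, sqMean{0, 0, 0};
        for (int i = 0; i < 9; ++i) {
            mean.x += nb[i].x;  sqMean.x += nb[i].x * nb[i].x;
            mean.y += nb[i].y;  sqMean.y += nb[i].y * nb[i].y;
            mean.z += nb[i].z;  sqMean.z += nb[i].z * nb[i].z;
        }
        mean = {mean.x / 9, mean.y / 9, mean.z / 9};
        Vec3 sigma{std::sqrt(std::max(0.0f, sqMean.x / 9 - mean.x * mean.x)),
                   std::sqrt(std::max(0.0f, sqMean.y / 9 - mean.y * mean.y)),
                   std::sqrt(std::max(0.0f, sqMean.z / 9 - mean.z * mean.z))};
        // Clip the history sample to the neighborhood's statistical bounds.
        Vec3 clipped{
            std::clamp(history.x, mean.x - gamma * sigma.x, mean.x + gamma * sigma.x),
            std::clamp(history.y, mean.y - gamma * sigma.y, mean.y + gamma * sigma.y),
            std::clamp(history.z, mean.z - gamma * sigma.z, mean.z + gamma * sigma.z)};
        Vec3 current = nb[4];  // center pixel of the 3x3 block
        // Exponential moving average: mostly history, a little current frame.
        return {clipped.x + (current.x - clipped.x) * blend,
                clipped.y + (current.y - clipped.y) * blend,
                clipped.z + (current.z - clipped.z) * blend};
    }

    int main() {
        Vec3 n[9];
        for (int i = 0; i < 9; ++i) n[i] = {0.5f, 0.5f, 0.5f};
        Vec3 history{0.9f, 0.1f, 0.5f};  // a stale, divergent history sample
        Vec3 out = temporalBlend(n, history, 1.0f, 0.1f);
        std::printf("resolved: %f %f %f\n", out.x, out.y, out.z);
        return 0;
    }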

XeGTAO

XeGTAO is an ambient occlusion technique by Intel suitable for low-power devices. You can read more about this technique on its GitHub page.

Bloom

Bloom is used for the blur, streak, anamorphic flare, and lenticular halo effects. It is based on a compute shader FFT that evaluates all of these effects with a single filter kernel and makes use of workgroup shared memory. Bloom is computed at reduced resolution to make it faster.
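
As a hedged one-dimensional illustration of the underlying idea, convolution via FFT, the sketch below convolves a row of brightness values with a streak-like kernel in the frequency domain. The actual implementation is a 2D compute shader FFT using workgroup shared memory; this CPU code only shows why a single FFT-based pass can apply an arbitrary filter kernel.

    #include <cmath>
    #include <complex>
    #include <cstdio>
    #include <vector>

    using cd = std::complex<double>;

    // Recursive radix-2 Cooley-Tukey FFT (length must be a power of two).
    void fft(std::vector<cd>& a, bool inverse) {
        size_t n = a.size();
        if (n == 1) return;
        std::vector<cd> even(n / 2), odd(n / 2);
        for (size_t i = 0; i < n / 2; ++i) { even[i] = a[2*i]; odd[i] = a[2*i+1]; }
        fft(even, inverse);
        fft(odd, inverse);
        const double pi = std::acos(-1.0);
        double ang = (inverse ? 2.0 : -2.0) * pi / static_cast<double>(n);
        for (size_t k = 0; k < n / 2; ++k) {
            cd t = std::polar(1.0, ang * static_cast<double>(k)) * odd[k];
            a[k] = even[k] + t;
            a[k + n / 2] = even[k] - t;
        }
    }

    int main() {
        // A bright spot on a dark row, and a decaying "streak" kernel.
        std::vector<double> row = {0, 0, 0, 8, 0, 0, 0, 0};
        std::vector<double> kernel = {0.5, 0.25, 0.125, 0, 0, 0, 0, 0.25};

        // Transform both, multiply per frequency bin (= convolution), invert.
        std::vector<cd> fr(row.begin(), row.end()), fk(kernel.begin(), kernel.end());
        fft(fr, false);
        fft(fk, false);
        for (size_t i = 0; i < fr.size(); ++i) fr[i] *= fk[i];
        fft(fr, true);

        for (size_t i = 0; i < fr.size(); ++i)
            std::printf("%zu: %f\n", i, fr[i].real() / static_cast<double>(fr.size()));
        return 0;
    }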

Volumetric lighting

Volume illumination is computed by approximating the light scattered towards the viewer by the medium between the eye and the visible surface at each lit pixel. The approximation is based on volume ray casting with a simple scattering and attenuation model. One ray is cast per lit pixel for each light, and the ray is sampled at several depth levels. The result is blurred before the volume illumination is combined with the surface illumination.

This implementation is the same as in 3DMark Wild Life.
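
A minimal sketch of one such ray for a single light, assuming a homogeneous medium with constant scattering and extinction coefficients; the constants and function names are illustrative, not the benchmark's actual model.

    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static float length3(Vec3 v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

    // March from the eye toward the visible surface, accumulating light
    // scattered toward the viewer by a homogeneous medium.
    float inscattering(Vec3 eye, Vec3 surface, Vec3 lightPos, float lightIntensity,
                       int sampleCount, float sigmaScatter, float sigmaExtinct) {
        Vec3 dir = sub(surface, eye);
        float rayLen = length3(dir);
        float step = rayLen / static_cast<float>(sampleCount);
        float result = 0.0f;
        for (int i = 0; i < sampleCount; ++i) {
            float t = (static_cast<float>(i) + 0.5f) * step;  // midpoint sample
            Vec3 pos{eye.x + dir.x / rayLen * t, eye.y + dir.y / rayLen * t,
                     eye.z + dir.z / rayLen * t};
            float distToLight = length3(sub(lightPos, pos));
            // Light attenuates on the way to the sample and again toward the eye.
            float transmittance = std::exp(-sigmaExtinct * (distToLight + t));
            float incident = lightIntensity / (distToLight * distToLight + 1e-4f);
            result += sigmaScatter * incident * transmittance * step;
        }
        return result;  // added to the surface illumination after blurring
    }

    int main() {
        float v = inscattering({0, 0, 0}, {0, 0, 20}, {0, 5, 10},
                               100.0f, 32, 0.05f, 0.02f);
        std::printf("in-scattered radiance: %f\n", v);
        return 0;
    }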

Depth of field

The depth of field effect is computed by filtering the rendered illumination at half resolution with three separable skewed box filters that form a hexagonal bokeh pattern when combined. The filtering is performed in two passes that exploit similarities between the three filters to avoid duplicate work.

The first pass renders to two render targets. The second pass renders to one target combining the results of the three filters. Before filtering, a circle of confusion radius is evaluated for each pixel and the illumination is premultiplied with the radius.

After filtering, the illumination is reconstructed by dividing the result by the filtered radius. This makes the filter gather out-of-focus illumination while preventing in-focus illumination from bleeding into neighboring pixels.

This implementation is the same as in 3DMark Wild Life.
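
As a one-dimensional illustration of the premultiply, filter, and divide steps with a single box filter; the real effect uses three skewed box filters in two passes, and the circle of confusion values here are illustrative inputs rather than a full lens evaluation.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // Box-filter a row of illumination premultiplied by the per-pixel circle
    // of confusion radius, then divide by the filtered radius to reconstruct.
    std::vector<float> depthOfFieldRow(const std::vector<float>& color,
                                       const std::vector<float>& cocRadius,
                                       int filterRadius) {
        int n = static_cast<int>(color.size());
        std::vector<float> out(n);
        for (int i = 0; i < n; ++i) {
            float colorSum = 0.0f, radiusSum = 0.0f;
            for (int j = -filterRadius; j <= filterRadius; ++j) {
                int k = std::clamp(i + j, 0, n - 1);
                // Pixels with a large CoC contribute strongly; in-focus pixels
                // (radius near zero) barely leak into their neighbors.
                colorSum += color[k] * cocRadius[k];
                radiusSum += cocRadius[k];
            }
            out[i] = radiusSum > 1e-5f ? colorSum / radiusSum : color[i];
        }
        return out;
    }

    int main() {
        // A bright in-focus pixel (CoC 0) surrounded by out-of-focus ones.
        std::vector<float> color = {0.1f, 0.1f, 1.0f, 0.1f, 0.1f};
        std::vector<float> coc = {4.0f, 4.0f, 0.0f, 4.0f, 4.0f};
        std::vector<float> blurred = depthOfFieldRow(color, coc, 1);
        // Note the bright in-focus value does not bleed into its neighbors;
        // the full effect blends the sharp image back in for focused pixels.
        for (float v : blurred) std::printf("%f ", v);
        std::printf("\n");
        return 0;
    }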