1.4 Converging Approaches

Figure 1. A high-level element breakdown for live-action, mixed live-action, animation, and games shots. Each image is generated using a roughly consistent set of stages and processing steps. The succession of image states is as follows: 1) Camera image, in camera-native color space; 2) Ungraded camera image or renderer output in an ungraded working color space; 3) Image with generic grade; 4) Final graded image for primary output with composited background and foreground elements; and 5) Graded image for an alternate output like PQ or DCDM. The terms used here are defined later in this document. Images © Geoff Boyle, © 2018 MARVEL, © Disney/Pixar, © Disney 2018. Imagery from Battlefield V courtesy of Electronic Arts Inc, © 2018 Electronic Arts Inc. All rights reserved.

Since the first version of this document, approaches have converged across cinematography, visual effects, animation, games, finishing, and grading. This harmonization has been driven by shared challenges in the evolution of display and capture technology: the shift from photochemical to digital image capture, the wide availability of wide-gamut, high-dynamic-range (HDR) displays, advances in rendering and image-generation research, and the increased integration of different production departments. Each production defines a slightly different overall color pipeline, but most discussion, debate, and variation comes down to choices about the transforms and formats used. Two concepts are used remarkably consistently across on-set capture, visual effects, games, finishing, grading, and software and hardware color-processing pipelines: image and color data with a specific meaning, referring to scene or display intensities, and transforms with specific expectations for input and output formats.

Figure 1 above shows a consistent set of image states leading to the final frame for live action, live action with integrated visual effects, fully synthetic visual effects images, animated features, and games.
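The succession of image states described above can be sketched as a minimal state machine, where each transform carries explicit expectations for its input and output. This is an illustrative sketch only: the names (`ImageState`, `Transform`, `apply`) and the specific transform labels are assumptions for this example, not terms defined by this document or by any particular color management system.

```python
from dataclasses import dataclass
from enum import Enum, auto


class ImageState(Enum):
    """The five image states from Figure 1."""
    CAMERA_NATIVE = auto()      # 1) camera image, camera-native color space
    WORKING_UNGRADED = auto()   # 2) ungraded camera/render in working space
    WORKING_GRADED = auto()     # 3) image with generic grade
    DISPLAY_PRIMARY = auto()    # 4) final graded image for primary output
    DISPLAY_ALTERNATE = auto()  # 5) graded image for alternate output (PQ, DCDM)


@dataclass
class Transform:
    # A transform has specific expectations for its input and output format.
    name: str
    input_state: ImageState
    output_state: ImageState


@dataclass
class Image:
    pixels: object          # pixel data elided; only the state is modeled here
    state: ImageState


def apply(image: Image, transform: Transform) -> Image:
    """Apply a transform, enforcing its input expectations."""
    if image.state is not transform.input_state:
        raise ValueError(
            f"{transform.name!r} expects {transform.input_state}, "
            f"got {image.state}"
        )
    return Image(image.pixels, transform.output_state)


# A hypothetical pipeline mirroring the image-state succession in Figure 1.
pipeline = [
    Transform("camera input transform", ImageState.CAMERA_NATIVE,
              ImageState.WORKING_UNGRADED),
    Transform("generic grade", ImageState.WORKING_UNGRADED,
              ImageState.WORKING_GRADED),
    Transform("primary output transform", ImageState.WORKING_GRADED,
              ImageState.DISPLAY_PRIMARY),
]

img = Image(pixels=None, state=ImageState.CAMERA_NATIVE)
for t in pipeline:
    img = apply(img, t)
# img.state is now ImageState.DISPLAY_PRIMARY
```

The point of the sketch is that when both image data and transforms declare their states explicitly, mismatches (e.g. grading camera-native data as if it were in the working space) become detectable pipeline errors rather than silent image defects.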