Hyperspectral/Lidar

Historically, we have conducted imaging assessment tests empirically, using resources such as the Esmeralda stage, the split-screen Macbeth, the condenser set with the Rainbow glass, and so on. Images more visually representative of production footage require a different approach. At one time, we contemplated creating “standing sets” on Stella that would be maintained in pristine condition so that valid comparisons could be made over time. We also considered that the dioramas at the Natural History Museum could serve such a purpose, since they are kept in pristine, unchanged condition over many years.

Our recent experience on the Solid State Lighting project with the computer modeling developed by Scott Dyer, including the simulated split-screen Macbeth, suggests still another approach. If we can replicate “real-world scenes” with sufficient accuracy in the virtual domain, then many of our analytical needs could be met by computer simulation.

As our computer modeling tools improve, it will become possible to assess photographic imaging processes from a variety of perspectives, including lens performance, detector performance and colour rendering, to name just a few. To accomplish this, however, we will need to create a catalogue of scenes and subjects that present imaging challenges.

These scenes, to be useful as analytical tools, will have to offer greater resolution, wider dynamic range and a wider colour gamut than can be recorded by ‘conventional’ devices. In short, they must objectively ‘emulate’ real-world scenes rather than subjectively ‘simulate’ them, as conventional recording does.
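
The distinction can be seen in how the data are stored. The fragment below is only a rough illustration with invented values: a conventional device-referred recording scales to a display white, clips and quantises, while a scene-referred emulation keeps the full-range values so any device model can be applied later.

```python
import numpy as np

scene_radiance = np.array([0.02, 0.18, 1.0, 12.0, 90.0])  # relative scene luminances (invented)

# 'Simulation': a conventional device-referred recording -- scale to a display
# white, clip the highlights, and quantise to 8 bits.
display_white = 1.0
simulated = np.round(np.clip(scene_radiance / display_white, 0.0, 1.0) * 255).astype(np.uint8)

# 'Emulation': keep the full-range, scene-referred floating-point values untouched.
emulated = scene_radiance.astype(np.float32)

print(simulated)  # the 12.0 and 90.0 highlights both collapse to 255
print(emulated)   # the ratio between them (about 2.9 stops) is preserved
```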

The means to do this may be provided by hyperspectral lidar imaging. This hybrid technology combines lidar (Light Detection and Ranging) with hyperspectral imaging (or its variant, “full spectral imaging”); in either case, the objective is to acquire the complete spectral data for each pixel in an image, as opposed to the usual RGB values. This would produce three-dimensional images in great and accurate detail with regard to shape, surface texture and colour.
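
As a simple sketch of why per-pixel spectral data subsumes conventional colour channels, the fragment below projects an assumed hyperspectral frame onto approximate CIE 1931 colour matching functions to recover tristimulus values; the Gaussian curves are crude stand-ins for the real CIE tables, and the frame itself is random placeholder data, but the same projection would yield the response of any camera or display we care to model.

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)  # nm, 10 nm sampling (assumed)

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Crude stand-ins for the CIE 1931 x-bar, y-bar, z-bar colour matching functions.
x_bar = 1.06 * gaussian(wavelengths, 599, 38) + 0.36 * gaussian(wavelengths, 442, 16)
y_bar = 1.01 * gaussian(wavelengths, 556, 47)
z_bar = 1.78 * gaussian(wavelengths, 449, 20)
cmf = np.stack([x_bar, y_bar, z_bar], axis=-1)        # (n_wavelengths, 3)

# A hypothetical hyperspectral frame: height x width x n_wavelengths radiance samples.
spectral_image = np.random.rand(4, 6, wavelengths.size)

# Tristimulus integration: XYZ = sum over wavelength of radiance * CMF * d_lambda.
d_lambda = 10.0
xyz_image = np.einsum('hwl,lc->hwc', spectral_image, cmf) * d_lambda

print(xyz_image.shape)  # (4, 6, 3) -- the familiar three-channel image falls out
```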

This is analogous to having a complete virtual scene rather than a conventional graphic depiction of a scene. With some effort, it should even be possible to model moving objects so that motion blur phenomena could be included in the analysis. Ultimately, we could perhaps attain a ‘virtual’ Esmeralda.
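
One way motion blur could be folded into such an emulation, sketched here under the assumption of a renderer that can be evaluated at arbitrary times (render_scene below is a hypothetical stand-in that just draws a moving square), is to average many sub-frame renders across the shutter-open interval:

```python
import numpy as np

def render_scene(t, size=64):
    """Hypothetical renderer: a bright square moving horizontally with time t (seconds)."""
    img = np.zeros((size, size))
    x = int(10 + 200 * t) % (size - 8)   # horizontal position at time t
    img[28:36, x:x + 8] = 1.0
    return img

def render_with_motion_blur(t_open, exposure, n_samples=32):
    """Average n_samples renders uniformly spaced across the exposure interval."""
    times = t_open + exposure * (np.arange(n_samples) + 0.5) / n_samples
    return np.mean([render_scene(t) for t in times], axis=0)

blurred = render_with_motion_blur(t_open=0.0, exposure=1 / 48)  # 180-degree shutter at 24 fps
print(blurred.shape, blurred.max())
```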