I'm an independent contractor. Clients choose me when they find that my expertise - and demeanor! - are a good match for their needs. Naturally, I work on projects of importance to my clients. But if I were independently wealthy and left to my own devices, I would work on these projects:
- Compare CIE CMFs and cone fundamentals. Color in the real world is best described in terms of the distribution of power across the spectrum of visible light. Human vision maps these power distributions into sensory values, then processes those signals at successively higher levels. The famous CIE color matching functions (CMFs) define the mapping from spectral power distributions (SPDs) to tristimulus values; these values are then the basis of the color systems used for measurement and image coding. However, psychophysical data have now revealed the "cone fundamentals" that are taken to be the raw spectral sensitivities of human vision. The cone fundamentals don't quite match the CIE CMFs. I'd like to perform a mathematical analysis of the relationship between the two sets, then correlate that analysis with visual performance. This task is highly relevant to image coding: emerging color appearance models have elements that operate in (estimated) cone fundamental space.
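As a toy illustration of the kind of analysis involved, the sketch below fits a 3×3 matrix relating one set of spectral sensitivity curves to another by per-row least squares. The curves here are synthetic Gaussians, not real CMFs or cone fundamentals; with tabulated CIE data in their place, the residual of the fit would quantify how far the cone fundamentals depart from a linear transform of the CMFs.

```python
import math

def gaussian(mu, sigma, wavelengths):
    # Synthetic bell-shaped sensitivity curve (illustration only)
    return [math.exp(-0.5 * ((w - mu) / sigma) ** 2) for w in wavelengths]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve3(A, b):
    # Cramer's rule for a 3x3 linear system
    d = det3(A)
    out = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = b[r]
        out.append(det3(Ai) / d)
    return out

def fit_matrix(basis, target):
    # Least-squares M minimizing ||M * basis - target||, row by row,
    # via the normal equations G m = rhs with Gram matrix G.
    n = len(basis[0])
    G = [[sum(basis[i][k] * basis[j][k] for k in range(n))
          for j in range(3)] for i in range(3)]
    return [solve3(G, [sum(basis[i][k] * t[k] for k in range(n))
                       for i in range(3)]) for t in target]

wl = range(400, 701, 5)
cmfs = [gaussian(600, 40, wl), gaussian(550, 40, wl), gaussian(450, 30, wl)]
true_M = [[0.9, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.05, 0.95]]
# Build a target set that is an exact linear combination of the basis,
# so the fit should recover true_M (and the residual should vanish):
cones = [[sum(true_M[r][c] * cmfs[c][k] for c in range(3))
          for k in range(len(cmfs[0]))] for r in range(3)]
M = fit_matrix(cmfs, cones)
```

With real data the recovered matrix would not be exact, and the size of the residual is precisely the interesting quantity.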
- Comparison of color image codes. In image coding it is important to relate code values to colors in such a way that image signal values (e.g., R'G'B', Y'CbCr, L*a*b*) are fairly perceptually uniform - that is, such that equal Euclidean distances in code value space produce approximately equal perceptual steps in color. As far as I can tell, no exhaustive research has been done to characterize the perceptual performance of existing, well-established color coding schemes. For example, no one has measured delta-E values between neighboring Y'CbCr codes in 8-bit digital video. I'd like to compare the performance of various spaces, with the goal of either establishing some existing space as nearly perceptually optimum, or contributing to the development of a new space that is. I believe that image compression will continue to be necessary for the distribution of motion images well into the future. So, any color space used to code images for distribution will have to be designed with compression in mind; in particular, I think chroma subsampling will continue to be necessary. I'd like to study how current and emerging color spaces behave under chroma subsampling, and to assess the desirability (or otherwise) of adhering to the constant-luminance "principle".
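To make the delta-E measurement concrete, here is a sketch that takes two 8-bit Y'CbCr codes through Rec. 601 matrixing, an assumed 2.4-power decoding function, assumed Rec. 709 primaries with D65 white, and CIE L*a*b*, then reports the 1976 (Euclidean) delta-E. The transfer function and primaries are assumptions for illustration, not a claim about any particular display or standard.

```python
import math

def ycbcr_to_rgb(y, cb, cr):
    # Rec. 601 studio-swing 8-bit Y'CbCr to R'G'B' in [0, 1]
    yn = (y - 16) / 219.0
    pb = (cb - 128) / 224.0
    pr = (cr - 128) / 224.0
    r = yn + 1.402 * pr
    g = yn - 0.344136 * pb - 0.714136 * pr
    b = yn + 1.772 * pb
    return [min(1.0, max(0.0, v)) for v in (r, g, b)]

def rgb_to_lab(rgb):
    # Assumed 2.4-power decoding, Rec. 709 primaries, D65 white
    lin = [c ** 2.4 for c in rgb]
    m = [[0.4124, 0.3576, 0.1805],
         [0.2126, 0.7152, 0.0722],
         [0.0193, 0.1192, 0.9505]]
    X, Y, Z = (sum(m[i][j] * lin[j] for j in range(3)) for i in range(3))
    white = (0.9505, 1.0, 1.0890)
    def f(t):
        return t ** (1.0 / 3.0) if t > 216.0 / 24389.0 \
               else (24389.0 / 27.0 * t + 16.0) / 116.0
    fx, fy, fz = (f(v / w) for v, w in zip((X, Y, Z), white))
    return (116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz))

def delta_e(code1, code2):
    # CIE 1976 delta-E between two 8-bit Y'CbCr codes
    lab1 = rgb_to_lab(ycbcr_to_rgb(*code1))
    lab2 = rgb_to_lab(ycbcr_to_rgb(*code2))
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

step = delta_e((126, 128, 128), (127, 128, 128))  # one luma step near mid-gray
```

Sweeping this over the whole code cube would yield the distribution of step sizes - the measurement described above.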
- Color camera and scanner design. Existing, practical cameras and scanners have spectral responses that closely resemble neither the cone fundamentals nor any transformation of the CIE CMFs. Consequently, these cameras see some colors differently from the way human vision sees them. I'd like to analyze the departures and characterize their perceptual impact. This would allow us to characterize the differences among images generated by different HDTV cameras, such as those from Sony and Thomson. In the long term, I'd like to establish the parameters of optimum spectral sensitivity curves, taking into account the sensitivity, color difference detection, and noise tolerance properties of vision.
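A minimal sketch of the kind of departure involved: a hypothetical camera whose long-wavelength channel peaks 15 nm short of a hypothetical observer's. All the curves are made-up Gaussians; the point is only the machinery of integrating an SPD against each set of sensitivities and comparing white-balanced responses.

```python
import math

wl = list(range(400, 701, 10))

def g(mu, sigma):
    # Synthetic Gaussian sensitivity curve (illustration only)
    return [math.exp(-0.5 * ((w - mu) / sigma) ** 2) for w in wl]

# Hypothetical observer curves, and a camera whose long-wave channel
# peaks at 585 nm instead of the observer's 600 nm:
observer = [g(600, 40), g(550, 40), g(450, 30)]
camera   = [g(585, 40), g(550, 40), g(450, 30)]

flat = [1.0] * len(wl)   # equal-energy white
deep_red = g(630, 25)    # a saturated long-wavelength test light

def sense(sens, spd):
    # Integrate the SPD against each channel's sensitivity
    return [sum(s * p for s, p in zip(row, spd)) for row in sens]

def normalized(sens, spd):
    # White balance: scale each channel so equal-energy white reads 1
    white = sense(sens, flat)
    return [r / w for r, w in zip(sense(sens, spd), white)]

eye = normalized(observer, deep_red)
cam = normalized(camera, deep_red)
```

For this light the camera's shifted red channel reports less response than the observer's, while the green channels agree - exactly the class of discrepancy whose perceptual impact is worth characterizing.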
- Color for digital cinema. Following up on the previous project, I'd like to characterize the color of motion picture film as recorded and eventually projected. I'd like to discuss with accomplished cinematographers which aspects of film image reproduction are desirable in the long term, and which are undesirable artifacts of the current optochemical processing. I'd like to compare the spectral and tone reproduction characteristics of film to what could be obtained digitally. And I'd like to design the optical and electronic signal processing elements required to make an HDTV/D-cinema camera look like film.
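As an illustration of film tone reproduction, the sketch below models a D-log E characteristic curve as a logistic function of log exposure, producing the classic toe and shoulder. Every parameter value here is invented, not measured from any film stock.

```python
import math

def film_density(exposure, d_min=0.1, d_max=3.0, gamma=0.6, pivot=0.18):
    # Density as a logistic function of log exposure: a flat toe, a
    # straight-line mid section with slope ~gamma (density per decade),
    # and a flat shoulder. All parameter values are illustrative only.
    log_e = math.log10(max(exposure, 1e-8) / pivot)
    span = d_max - d_min
    a = 4.0 * gamma / span   # sets the mid-curve slope to gamma
    return d_min + span / (1.0 + math.exp(-a * log_e))

# Toe, mid-scale, and shoulder of the invented curve:
densities = [film_density(e) for e in (0.001, 0.18, 30.0)]
```

Emulating film digitally would mean fitting curves like this to measured sensitometric data, then building the matching electronic transfer function.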
- Video. I sometimes feel that I've accomplished pretty much all that I can do in the domain of digital video - what with the 736-page book, and all. However, there are a few loose ends that I'd like to tie up. It would be great to design and publish C source code, and matching VHDL code, implementing the classic video algorithms of my book, so as to provide reference designs.
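For instance, one of the classic conversions - full-range 8-bit R'G'B' to studio-swing Rec. 601 Y'CbCr - can be sketched as follows (in Python here, merely to show the arithmetic that C and VHDL reference designs would implement):

```python
def rgb_to_ycbcr601(r, g, b):
    # Full-range 8-bit gamma-corrected R'G'B' in; studio-swing 8-bit
    # Y'CbCr out (Y' in 16..235, Cb and Cr in 16..240, centered on 128).
    rn, gn, bn = r / 255.0, g / 255.0, b / 255.0
    y = 0.299 * rn + 0.587 * gn + 0.114 * bn   # Rec. 601 luma
    cb = 0.5 * (bn - y) / (1.0 - 0.114)        # = (B' - Y') / 1.772
    cr = 0.5 * (rn - y) / (1.0 - 0.299)        # = (R' - Y') / 1.402
    return (round(16 + 219 * y),
            round(128 + 224 * cb),
            round(128 + 224 * cr))
```

A proper reference design would also pin down rounding, clamping, and bit-exact fixed-point behavior, which is exactly why matched C and VHDL versions are worth publishing.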
- Bring accurate color to Mathematica and MATLAB. Both systems implement color primitives, and both are useful for developing color image processing algorithms. However, both document RGB and so-called YIQ incorrectly, neglecting the nonlinearity imposed by gamma correction - or, to put it another way, failing to recognize that RGB codes are related to intensity through roughly the 2.5-power function of the monitor. Of the two, only MATLAB gives its users access to color management technology. And neither provides end-to-end use of color management to produce consistent color from its image and graphics primitives. I'd like to remedy all of that.
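The error is easy to demonstrate. On a CRT, intensity follows roughly a 2.5-power function of the normalized code value, so treating code values as proportional to intensity - as the documentation in question does - is badly wrong at mid-scale:

```python
def code_to_intensity(code, gamma=2.5, max_code=255):
    # Physical intensity produced by an R'G'B' code on a monitor with
    # a power-function transfer characteristic (2.5 is a typical CRT
    # figure; the exact exponent varies with the display).
    return (code / max_code) ** gamma

naive = 128 / 255                 # "half code means half intensity" - wrong
mid = code_to_intensity(128)      # in fact roughly 0.18, not 0.5
```

Computing "YIQ" or luma directly from these code values as if they were linear light is precisely the mistake such documentation propagates.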
- Get Foley, van Dam, Feiner, and Hughes to issue an erratum to Computer Graphics: Principles and Practice, correcting their grossly inaccurate description of the video "YUV" color space. (Meanwhile, see luminance considered harmful.)