Yesterday, Feb. 22nd, 2012, the image sensor session took place at the ISSCC. Several very interesting papers were presented, and for a couple of subjects two different papers addressed the same topic. That gave the audience the opportunity to compare two techniques with their pros and cons. Well done by the organizing committee.
There were two papers, both from Samsung, dealing with the capture of depth information by means of Time-of-Flight (ToF) sensing. New is the possibility to capture normal video (called RGB) and depth (called Z) information simultaneously, where simultaneously basically means: with the same sensor.
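For readers less familiar with ToF imaging: both papers build on the standard continuous-wave principle, in which the sensor demodulates reflected near-IR light at four phase offsets and the phase shift of the returned wave encodes the distance. A minimal sketch of that generic principle (not taken from the papers; the function name and the 20 MHz modulation frequency in the example are my own assumptions):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(a0, a90, a180, a270, f_mod):
    """Estimate distance (m) from four demodulation samples taken at
    phase offsets 0, 90, 180 and 270 degrees, using the standard
    4-tap continuous-wave ToF formula at modulation frequency f_mod (Hz)."""
    phase = math.atan2(a90 - a270, a0 - a180)  # phase shift, radians
    if phase < 0.0:
        phase += 2.0 * math.pi                 # wrap into [0, 2*pi)
    return C * phase / (4.0 * math.pi * f_mod)

# Example: at 20 MHz modulation, a half-cycle phase shift corresponds
# to a quarter of the modulation wavelength, roughly 3.75 m.
d = tof_depth(0.0, 0.5, 1.0, 0.5, 20e6)
```

Note that the unambiguous range is set by the modulation frequency (about 7.5 m at 20 MHz), which is one reason these sensors work with near-IR illumination of a well-defined frequency.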
The first solution captures RGB and Z at the same time. The device has an image field composed of two types of lines: lines sensitive to and optimized for RGB, and lines sensitive to and optimized for Z. For every two lines of RGB there is one line of Z. The two RGB lines are provided with the classical Bayer pattern; the Z line has no filter at all. To give the Z pixels extra sensitivity, the width of a single Z pixel equals the width of 4 RGB pixels.
The pixels not only differ in size, but also in architecture. The RGB pixels have an extra potential barrier in the silicon underneath the pixels. This barrier is not present underneath the Z pixels, basically to extend their near-IR sensitivity, because it is the near-IR signal that is used for sensing the depth information. It was not really clear from the paper whether any effort was made to protect the RGB pixels from the incoming near-IR light, but in the Q&A the presenter referred to future work to put extra near-IR filters on top of the RGB pixels.
The second solution does not capture RGB and Z at the same time, but sequentially with the same sensor, for instance with the odd frames delivering RGB and the even frames delivering Z information. The RGB pixels are organized in a 2×4 shared architecture and provided with the standard Bayer pattern. When these pixels are used in Z mode, 4×4 binning is performed (a combination of charge-domain and analog-domain binning) to increase the sensitivity of the Z pixels. Innovative in this design is the location and sharing of the floating diffusions: every single RGB pixel has two floating diffusions (one left and one right of the pinned photodiode) that can be tied together with the floating diffusions of the neighbouring pixels (a kind of back-to-back architecture). At the end of this paper as well, measurement results and images were shown, both of the RGB and the Z results. During the Q&A the presenter mentioned that the RGB images shown were taken with a near-IR filter in front of the sensor, and that in the Z case the filter was removed.
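Just as a numerical illustration of what that 4×4 binning buys: combining 16 neighbouring pixels into one "super-pixel" raises the signal level by 16×. The sketch below is only a digital block sum, which is my own simplification; the real device sums in the charge and analog domains before readout, which has the additional benefit of not adding read noise 16 times.

```python
import numpy as np

def bin_4x4(frame):
    """Sum non-overlapping 4x4 blocks of a 2-D pixel array into one
    value per block (a digital stand-in for charge/analog binning)."""
    h, w = frame.shape
    assert h % 4 == 0 and w % 4 == 0, "frame must tile into 4x4 blocks"
    # Reshape so each 4x4 block gets its own pair of axes, then sum them.
    return frame.reshape(h // 4, 4, w // 4, 4).sum(axis=(1, 3))

# A uniform frame of 1 electron per pixel becomes 16 electrons per bin.
binned = bin_4x4(np.ones((8, 8)))
```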
So, two different sensors with different architectures were presented for the same application. It was clear that in both cases there is still work to do to improve the performance, but nevertheless the two papers gave a clear indication of the direction in which Samsung (in this case) is seeking new applications.
More to come !
Albert, 23-02-2012.