Archive for February, 2012

Report ISSCC 2012 (4)

Tuesday, February 28th, 2012

Another nice pair of papers came from NHK and Samsung.  In particular their ADCs attracted my attention : both papers made use of what I myself call a tandem-ADC.  This is an ADC that is built around two different architectures or two different working principles.  Last year's ISSCC already had such a device in a Sony sensor, in which the column ADC was split into two parts : one with a counter and one without a counter, if I remember correctly.

This time, NHK presented a paper very similar to the one presented at the International Image Sensor Workshop.  It is a 33 Mpixel UHDTV sensor with pixels of 2.8 um x 2.8 um, capable of operating at 120 fr/s.  The ADC used is split into two parts, both cyclic ADCs, delivering 12 bits in total.  The upper 4 bits are converted in a first cyclic ADC (based on 3 cycles), the lower 8 bits are converted in a second cyclic ADC (based on 8 cycles).  The two cyclic ADCs operate in a pipelined organization, and in this way extra speed is gained.
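The tandem idea can be illustrated with a small behavioral model.  This is my own simplified sketch, not the actual NHK circuit (which uses 1.5-bit cycles with redundancy) : a first cyclic stage resolves the 4 MSBs, its amplified residue is handed to a second cyclic stage that resolves the 8 LSBs, and the two codes are combined into one 12-bit word.

```python
def cyclic_adc(v, vref, nbits):
    """Behavioral 1-bit-per-cycle cyclic ADC.

    Returns (code, residue): `code` is the nbits result and `residue`
    is the amplified leftover voltage, again in the range [0, vref).
    """
    code = 0
    for _ in range(nbits):
        bit = 1 if v >= vref / 2 else 0
        code = (code << 1) | bit
        v = 2 * v - bit * vref   # multiply-by-2 and subtract the reference
    return code, v

def tandem_adc(v, vref=1.0):
    """Two cyclic stages in tandem: 4 MSBs first, then 8 LSBs on the residue."""
    msb, residue = cyclic_adc(v, vref, 4)
    lsb, _ = cyclic_adc(residue, vref, 8)
    return (msb << 8) | lsb      # combined 12-bit output code

code = tandem_adc(0.5)           # half scale gives code 2048 out of 4096
```

The pipelining gain comes from the fact that, while the second stage is still converting the residue of row n, the first stage can already start on row n+1.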

The second presentation, Samsung’s, discussed a 24 Mpixel APS-C size imager with a 3.9 um x 3.9 um pixel size.  The on-chip ADC has a resolution of 14 bits over a full range of 1.7 V.  The circuit realizing the first 2 to 6 bits, in combination with the CDS, is based on a delta-sigma converter.  The remaining 8 bits are converted in a cyclic ADC.  The beauty of this construction is the fact that the delta-sigma part and the cyclic ADC share the same building blocks : because the two parts work in series, several building blocks of the delta-sigma are reused in the cyclic ADC.  In this way the circuitry needed to realize the complete ADC remains relatively small.  Clever idea !

Talking about ADCs : Delft University of Technology presented a paper on a column-level ADC capable of multiple sampling without any increase in hardware.  The ADC is based on an up-counter with BWI (Bit-Wise Inversion) to allow digital CDS.  When multiple sampling is applied, the counters simply continue counting over several consecutive samples.  Without any special pixel design, the multiple sampling (in combination with an extra column amplifier) resulted in a noise level of only 0.7 electrons.  The low conversion gain of the pixel (< 50 uV/e-) clearly indicates further room for improvement.  Pixel noise levels of 0.28 electrons, fabricated in “standard” CIS processes, are needed for single-electron/photon detection.  This performance level is coming closer !
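The BWI trick can be sketched numerically.  In this simplified model of mine (not the exact Delft circuit), the counter counts up during all reset samples, is bit-wise inverted once, then keeps counting during all signal samples; the natural counter wrap-around leaves sum(signals) - sum(resets) - 1 in the counter, so the CDS subtraction comes for free, and multiple sampling is just more ramps on the same counter.

```python
def bwi_cds(reset_counts, signal_counts, nbits=12):
    """Digital CDS with an up-counter and Bit-Wise Inversion (BWI).

    Counts up during each reset ramp, inverts all counter bits once,
    then keeps counting during each signal ramp.  Illustrative model.
    """
    mask = (1 << nbits) - 1
    acc = 0
    for r in reset_counts:
        acc = (acc + r) & mask   # count up during each reset ramp
    acc = ~acc & mask            # single bit-wise inversion
    for s in signal_counts:
        acc = (acc + s) & mask   # continue counting during the signal ramps
    # the wrap makes acc = sum(signals) - sum(resets) - 1  (mod 2^nbits)
    return acc

M = 4                            # multiple sampling: 4 reset/signal pairs
resets  = [100, 101, 99, 100]
signals = [350, 351, 349, 350]
cds = (bwi_cds(resets, signals) + 1) / M   # averaged CDS value -> 250.0
```

Averaging over M samples reduces the temporal noise by roughly the square root of M, which is how the 0.7-electron level was reached without extra column hardware.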

More to come ? Maybe !

Albert, 28-02-2012.


Report ISSCC 2012 (3)

Monday, February 27th, 2012

As I mentioned earlier, there were a few “duo” presentations at the ISSCC.  A second pair of papers that went together nicely were two papers on global shutter sensors.

The first one came from Sony, describing a 10 Mpixel device.  The novelty of the device was indeed the global shutter, based on a dual storage node in the pixel.  As is known, the floating diffusion is not really a dark-current-friendly storage node, nor a CDS-friendly one.  For that reason an extra in-pixel capacitor can be used between the transfer gate and the floating diffusion.  This idea is not new, but in the Sony paper this extra storage node is relatively small, so it does not occupy that much space.  The extra storage node alone is used for very small charge packets, and the readout can then be operated in CDS mode.  For larger charge packets the extra storage node cannot hold all the charge, and part of it spills over to the floating diffusion.  In that case a dual storage node is used : the extra in-pixel capacitor together with the floating diffusion.  The latter cannot be operated with CDS, but that is not a real problem, because it only plays a role when the charge packet is large (read : when the noise is dominated by photon shot noise).  By itself a simple and clever idea, BUT the sensor has a 2×1 shared pixel concept : every photodiode is provided with an extra in-pixel storage capacitor, but for two photodiodes there is only one floating diffusion.  In other words, the idea presented in the paper can only be applied if the sensor is used in a 5 Mpixel mode instead of the announced 10 Mpixel mode.  To me this was a bit of a disappointing conclusion of the paper.
The device is realized in a 90 nm technology with 1P5M plus a light shield (so is it 1P5M or 1P6M ?).  During the Q&A more info was requested about the FPN and colour, but apparently the device cannot be used in colour mode.

The second global shutter device was presented by Tohoku University.  It was announced as a 1 Tpixel/s device (not 1 transistor, but 1 tera-pixel per second !).  The device can deliver 10 Mframes/s at full resolution and up to 20 Mframes/s in half resolution mode.  The imaging area has 400 x 256 pixels and every pixel has 128 analog memory cells.  So the device captures a limited burst of 128 frames at high speed.  The memory part is organized above and below the imaging array; the floorplan of the device looks like a split frame-transfer device (for those of you familiar with CCDs).  The memory cells are made of two capacitors in parallel : a poly-poly capacitor and a MOS-gate capacitor, with one common poly layer.  The pixels are 32 um x 32 um, pretty large, and have a PPD of almost 16 um in length.  During the author interview I asked the presenter how he solved the issue of image lag within such a large pixel at such a high speed.  Unfortunately the secrets were not revealed; the presenter promised me that this will be presented at another conference.  The technology used is 2P4M, 0.18 um, and at full speed the device dissipates 24 W.  Be careful not to burn your fingers !
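The 1 Tpixel/s headline number follows directly from the resolution and the frame rate, and the 128 in-pixel memory cells tell you how short the recorded burst really is :

```python
pixels_per_frame = 400 * 256        # 102,400 pixels in the imaging area
frame_rate = 10e6                   # 10 Mframes/s at full resolution
throughput = pixels_per_frame * frame_rate   # ~1.02e12 pixels/s, i.e. ~1 Tpixel/s

stored_frames = 128                 # analog memory cells per pixel
record_time = stored_frames / frame_rate     # only 12.8 microseconds of scene time
```

So the device is a burst camera : blindingly fast, but only for a dozen microseconds at a time.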
Amazing movies were shown to illustrate the capability of the high-speed global shutter device.  Very impressive, taking into account that the work is part of a PhD project.  Congratulations !

More to come !

Albert, 27-02-2012.

Report ISSCC 2012 (2)

Thursday, February 23rd, 2012


Yesterday, Feb. 22nd, 2012, the image sensor session took place at the ISSCC.  Several very interesting papers were presented.  For a couple of subjects, two different papers were presented.  That gives the audience the opportunity to compare two techniques with their pros and cons.  Well done by the organizing committee.

There were two papers, both from Samsung, dealing with the capture of depth by means of Time-of-Flight sensors.  New is the possibility to capture normal video (called RGB) and depth (called Z) information simultaneously.  Simultaneous basically means with the same sensor.

The first solution captures RGB and Z at the same time.  The device has an image field composed of two types of lines : lines sensitive to and optimized for RGB, and lines sensitive to and optimized for Z.  For every two lines of RGB there is one line of Z.  The two RGB lines are provided with the classical Bayer pattern; the Z line has no filter at all.  To provide the Z pixels with extra sensitivity, the width of a single Z pixel is equal to the width of 4 RGB pixels.
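One repeating tile of such a line-interleaved layout can be sketched as follows; the exact Bayer phase and the line ordering are my assumptions, the paper only states the 2:1 line ratio and the 4x-wide Z pixel.

```python
def rgbz_tile(width=8):
    """One repeating tile of the assumed RGBZ layout: two Bayer lines
    (R/G and G/B) followed by one unfiltered Z line whose pixels are
    4x wider, hence collect roughly 4x more light."""
    bayer_rg = ["R" if c % 2 == 0 else "G" for c in range(width)]
    bayer_gb = ["G" if c % 2 == 0 else "B" for c in range(width)]
    z_line = ["Z"] * (width // 4)   # one Z pixel spans 4 RGB columns
    return [bayer_rg, bayer_gb, z_line]

for line in rgbz_tile():
    print(" ".join(line))
```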
The pixels not only differ in size, but also in architecture.  The RGB pixels have an extra potential barrier in the silicon underneath the pixels.  This barrier is not present underneath the Z pixels, basically to extend the near-IR sensitivity, because it is the near-IR signal that is used for sensing the depth information.  It was not really clear from the paper whether any effort was made to protect the RGB pixels from the incoming near-IR light, but in the Q&A the presenter referred to future work to put extra near-IR filters on top of the RGB pixels.

A second solution does not capture RGB and Z at the same time, but sequentially with the same sensor, for instance the odd frames giving RGB and the even frames giving Z information.  The RGB pixels are organized in a 2×4 shared architecture and provided with the standard Bayer pattern.  When these pixels are used in the Z mode, a 4×4 binning is done (a combination of the charge domain and the analog domain) to increase the sensitivity of the Z pixels.  Innovative in this design is the location and sharing of the floating diffusions.  Every single RGB pixel has two floating diffusions (one left and one right of the pinned photodiode) that can be tied together with the floating diffusions of the neighbouring pixels (a kind of back-to-back architecture).  At the end of this paper as well, measurement results and images were shown, both of the RGB and the Z results.  During the Q&A the presenter mentioned that the RGB images shown were taken with a near-IR filter in front of the sensor and that in the Z case the filter was removed.
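Numerically, 4×4 binning simply sums 16 neighbouring pixels into one Z sample, trading resolution for sensitivity.  A minimal sketch (in the sensor the summation happens partly in the charge domain and partly in the analog domain; here it is a pure numerical illustration) :

```python
import numpy as np

def bin4x4(frame):
    """Sum non-overlapping 4x4 neighbourhoods, as in the Z-mode binning.

    A (h, w) frame becomes a (h/4, w/4) frame where every output pixel
    collects the signal of 16 input pixels.
    """
    h, w = frame.shape
    return frame.reshape(h // 4, 4, w // 4, 4).sum(axis=(1, 3))

frame = np.ones((8, 8))
binned = bin4x4(frame)     # each binned Z pixel collects 16x the signal
```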

So, two different sensors with different architectures were presented for the same application.  It was clear that in both situations there is still work to do to improve the performance, but nevertheless the two papers gave a clear indication of the direction in which Samsung (in this case) is seeking new applications.

More to come !

Albert, 23-02-2012.


Report ISSCC 2012 (1)

Tuesday, February 21st, 2012


Yesterday, Feb. 20th, ISSCC 2012 started in San Francisco.  In the morning there were the plenary sessions.  They did not have any specific imaging content or imaging information.  In the afternoon, during the so-called medical session, a couple of imaging papers were presented.  Here follows a quick report on three of them.

H-S. Kim et al. (KAIST and SAIT) described an X-ray photon counting sensor built around a HgI2 photoconductor and a CMOS readout circuit.  The interesting part of the paper was the discrimination of the energy level of the incoming X-rays.  In the presented solution, 3 different energy levels can be detected.  The discrimination itself is done in the analog domain by appropriate thresholding of the peak signal generated by the incoming X-ray photon.  So it is really an X-ray photon counter, and based on the peak signal detected by the CMOS circuitry, the energy of the X-ray can be classified.
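The classification principle can be sketched as follows.  The threshold levels and peak values below are illustrative assumptions of mine (the real chip uses analog comparators per channel, not software) : each photon's peak signal is compared against three thresholds, and the highest threshold exceeded determines the energy bin.

```python
def classify_photon(peak, thresholds=(0.3, 0.6, 0.9)):
    """Assign an X-ray photon to an energy bin based on its peak signal.

    `thresholds` are hypothetical comparator levels (in volts).
    Returns None when the peak stays below the lowest threshold (noise).
    """
    bin_index = None
    for i, t in enumerate(thresholds):
        if peak >= t:
            bin_index = i            # the highest threshold exceeded wins
    return bin_index

counts = [0, 0, 0]                   # one counter per energy bin
for peak in [0.1, 0.35, 0.7, 0.95, 0.62]:
    b = classify_photon(peak)
    if b is not None:
        counts[b] += 1               # -> counts == [1, 2, 1]
```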

J. Choi et al. (University of Michigan) presented a relatively small image sensor that can work in 4 different modes.  In a so-called monitoring mode, the sensor works at 0.8 V in a low-power mode.  When the sensor (autonomously) detects that it has sufficient energy, it can work at a 1.8 V supply and run in a high-gain mode (amplification of the signal by a factor of 8), a normal mode, or a high dynamic range mode (double exposure, both at half the resolution).  The beauty of the design is the fact that switching between these various modes basically requires only a different set-up of the pixel and/or column-level ADC.  This is achieved by a specific pixel design, such that the pixel transistors can be used as a regular source follower, or can form part of the ADC circuitry.  The basic application for this sensor can be found in wireless sensor networks.

M-T. Chung et al. (National Tsing Hua University, Hsinchu) presented an ultra-low-power sensor, consuming 4.95 uW at a power supply voltage of 0.5 V.  The pixel count is 64 x 40, running at 11.8 fps.  The sensor converts the incoming information into a pulse-width modulated output signal.  This is realized by an in-pixel comparator based on 5 transistors.  Nice to see a device operating at 0.5 V.  The author claimed a dynamic range of 82 dB.

The afternoon session started at 1:30pm, and I entered the room around 1:20pm.  At that time all speakers were already sitting in the front rows.  The first view of these front rows scared me a bit : all black-haired young guys.  For the old grey man, it was a confrontation with the fact that the new generation is ready to take over, and they mainly come from the Far East.  Nevertheless WELCOME guys, and make sure you have a lot of fun in solid-state imaging !

[If particular papers are not mentioned in my report, that only means that I did not attend the paper presentation.  Not finding a paper review in my blog does NOT mean that the paper was of low quality !]

Albert, 21-02-2012.


How to Measure : Fixed-Pattern Noise in Light or PRNU (1)

Tuesday, February 14th, 2012


The logical next step in the “How to Measure” discussion is to look at the non-uniformities with light input on the sensor, or Photo-Response Non-Uniformity (PRNU).  PRNU is the variation of the output signal from pixel to pixel when light is falling on the sensor.  It should be noted that the average sensor signal itself can be composed of :

- a DC offset, introduced by the electronic circuitry, which is (in first instance) independent of temperature and exposure time,

- the dark current, depending on temperature and on exposure time,

- the photo response, depending on exposure time.

Just as in the case of the average signal, the non-uniformities are calculated (!) based on several images taken under controlled conditions.  To limit the influence of any thermal noise component, several images need to be grabbed, preferably at various exposure or integration times.  Basically, the same data or images as used for measuring the average signal with light input can be reused.  So after averaging all images taken at a particular exposure time to reduce the thermal noise, the calculations can take place on the averaged resulting image.  To make sure that the obtained result contains the PRNU and is not too much “contaminated” by the dark current, the amount of light put on the sensor should be large enough that the photon-generated signal is at least two orders of magnitude larger than the dark-current-generated signal.
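The averaging step can be sketched with synthetic data.  The numbers below (a 10 DN fixed pattern buried in 50 DN of temporal noise, 100 frames) are assumptions for illustration only : averaging N frames reduces the temporal noise by the square root of N, so the spatial standard deviation of the averaged frame approaches the fixed pattern.

```python
import numpy as np

def mean_frame(frames):
    """Average N frames taken at one exposure time; temporal (thermal)
    noise drops by sqrt(N) while the fixed pattern is preserved."""
    return np.mean(np.stack(frames), axis=0)

# synthetic example: a fixed pattern buried in temporal noise
rng = np.random.default_rng(0)
fixed = 1000 + 10 * rng.standard_normal((64, 64))            # 10 DN "PRNU" pattern
frames = [fixed + 50 * rng.standard_normal((64, 64)) for _ in range(100)]

avg = mean_frame(frames)
# residual temporal noise is 50/sqrt(100) = 5 DN, so the spatial std of
# `avg` is about sqrt(10^2 + 5^2) = 11 DN, close to the 10 DN pattern
```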

The images used in the calculation of the PRNU are shown in Figure 1 : for 25 different exposure times, the results are visualized in the mosaic image.  The corresponding exposure times are indicated.



Figure 1 : Sensor output with light input as a function of the exposure time.

The light input conditions are : a colour temperature of 5600 K and 5 lux on the sensor surface.  As can be seen from Figure 1, the sensor saturates around 400 ms.  This is due to the limitation of the ADC in combination with the gain setting of the camera.  These effects will be explained and measured later in another blog.

A first way of measuring/calculating the PRNU is to check its behaviour as a function of the exposure time.  The result is shown in Figure 2.



Figure 2 : Fixed-pattern noise with light as a function of the exposure time.

There are four curves shown, one for each colour channel.  Please note that these curves for the PRNU are obtained after correction of the defect pixels !

From the regression line calculated over the linear part of the various curves, the following data can be extracted :

- the FPN independent of the exposure time, or the FPN in dark, equal to 3.09 DN (average of the two green channels),

- the time-dependent part of the FPN, being the PRNU, equal to 214.4 DN/s (average of the two green channels).

Taking into account the data obtained (in the previous blog) for the average signal in the various colour channels, the PRNU is equal to :

- blue channel : 93.6/8770 = 1.07 %,

- green in the blue line : 211.7/12653 = 1.67 %,

- green in the red line : 217.0/12715 = 1.71 %,

- red channel : 170.1/7612.8 = 2.23 %.
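The offset/slope extraction and the conversion to a percentage can be reproduced with a simple linear fit.  The data points below are synthetic illustrations close to the green-channel values, not the measured data : the FPN curve is fitted against exposure time, and the slope is divided by the slope of the average signal.

```python
import numpy as np

# hypothetical points on the linear part of one channel's curve:
# exposure times [s], FPN [DN] and average signal [DN] (assumed values)
t      = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
fpn    = 3.1 + 214.0 * t      # noiseless synthetic FPN data
signal = 12650.0 * t          # synthetic average signal, 12650 DN/s

fpn_slope, fpn_offset = np.polyfit(t, fpn, 1)   # PRNU [DN/s] and FPN in dark [DN]
sig_slope = np.polyfit(t, signal, 1)[0]         # average signal slope [DN/s]

prnu_percent = 100 * fpn_slope / sig_slope      # -> about 1.69 %
```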

How should the FPN of a sensor or camera be expressed ?  In contrast to the DSNU, the PRNU normally is Gaussian distributed (after correction of the defects and the shading, see next blog).  For that reason it is straightforward to express the PRNU as a percentage of the average signal with light input.  This is also done in the calculation above.  To show the Gaussian distribution of the sensor signal with light input, the histogram of the output (at 25 % of saturation) is illustrated in Figure 3.  The left group of curves shows the histogram with a linear vertical axis, the right group of curves shows the same data with a logarithmic vertical axis.  The latter is preferred because it also shows the distribution of any deviating pixels much better.



Figure 3 : histogram of the average signal with light input (at 25 % of saturation).
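Why the logarithmic axis matters can be shown with synthetic data (the mean level, the 1.7 % PRNU and the outlier values below are assumptions) : a handful of deviating pixels produce histogram bins with single-digit counts, invisible next to a Gaussian peak of tens of thousands on a linear axis, but clearly visible once the counts are plotted logarithmically.

```python
import numpy as np

# synthetic frame at ~25 % of saturation with a 1.7 % Gaussian PRNU
rng = np.random.default_rng(1)
mean = 1024.0
frame = rng.normal(mean, 0.017 * mean, (512, 512))
frame[:5, 0] = 1400.0            # five deviating "hot" pixels

counts, edges = np.histogram(frame, bins=100, range=(900, 1500))
# on a linear axis the outlier bins (a few counts next to peaks in the
# tens of thousands) vanish; a log axis keeps such bins clearly visible
peak_bin = int(counts.max())
smallest_occupied_bin = int(counts[counts > 0].min())
```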

“There is a warning sign on the road ahead” :

Of crucial importance in this measurement is to measure the PRNU and not the non-uniformity of the light source.  For that reason special attention is needed to create a uniform illumination.  This can be done by :

- using a point source at a large distance, but in that case the light input will be relatively small,

- making use of a diffuser in front of the image sensor,

- imaging a uniform target onto the sensor, but in that case the non-uniformity of the lens will be included,

- using an integrating sphere.  This is most probably the easiest solution, although integrating spheres do not have a uniformity of 100 % either.

In the case of a large imaging array, creating a uniform illumination might be complicated.  In that case a smaller area of the sensor (in the centre of the device) can be used, and the PRNU can be characterized across this smaller area.  It should be noted that the PRNU values of a smaller area are always more optimistic than the PRNU values of the total sensor area.

Good luck with the PRNU measurements, more to follow next time.

Albert, 14-02-2012.