Archive for February, 2014

How to Measure Modulation Transfer Function (1)

Thursday, February 20th, 2014

In simple words, the modulation transfer function or MTF is a measure of the spatial resolution of an imaging component.  The latter can be an image sensor, a lens, a mirror or the complete camera.  In technical terms, the MTF is the magnitude of the optical transfer function, which is the Fourier transform of the response to a point illumination.

The MTF is not the easiest measurement that can be done on an imaging system.  Various methods can be used to characterize the MTF, such as the “slit image”, the “knife edge”, the “laser-speckle technique” and “imaging of sine-wave patterns”.  It should be noted that all methods listed, except the “laser-speckle technique”, measure the MTF of the complete imaging system : all parts of the imaging system are included, such as the lens, filters (if any are present), cover glass and image sensor.  Even the processing of the sensor’s signal can have an influence on the MTF, and will be included in the measurement.

In this first MTF blog the measurement of the modulation transfer function based on imaging a sine-wave pattern will be discussed.  It should be noted that in this case dedicated testcharts are used to measure the MTF, and the pattern on the chart should change sinusoidally between dark parts and light parts.  If a square-wave pattern is used instead, not the MTF but the CTF (= Contrast Transfer Function) will be measured, and the values obtained for the CTF will be larger than the ones obtained for the MTF.

The method described here is based on the work of Anke Neumann, written down in her MSc thesis “Verfahren zur Aufloesungsmessung digitaler Kameras”, June 2003.  The basic idea is to use a single testchart with a so-called Siemens star.  An example of such a testchart is illustrated in Figure 1.

Figure 1 : Output image of the camera-under-test observing the Siemens star.

(Without going further into detail, the testchart contains more structures than are used in the MTF measurement reported here.)  The heart of the testchart is the Siemens star with 72 “spokes”.  As can be seen, the distance between the black and white structures on the chart becomes larger as one moves away from the center of the chart.  In other words, the spatial frequency of the sinusoidal pattern becomes lower towards the outside of the Siemens star, and higher closer to its center.  Around the center of the Siemens star, the spatial frequency of the sinusoidal pattern is even too high to be resolved by the camera-under-test, and aliasing shows up.  In the center of the Siemens star a small circle is included with two white and two black quarters.  These are going to play a very important role in the measurements.
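
To put numbers on this : a star with 72 sine cycles per revolution produces, on a circle of radius r pixels around the center, a spatial frequency of 72/(2·π·r) cycles per pixel.  The Nyquist frequency of 0.5 cycles per pixel is thus reached at a radius of 72/π ≈ 23 pixels, which is why aliasing shows up around the center of the star.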

The measurement procedure goes as follows :

  1. Focus the image of the Siemens star, placed in front of the camera, as well as possible on the imager.  Try to bring the Siemens star as close as possible to the edges (top and bottom) of the imager,
  2. Shoot an image of the testchart (in the example described here, 50 images were taken and averaged to limit the temporal noise).

In principle, these two steps are all one needs to be able to measure/calculate the MTF.  But to obtain a higher accuracy of the measurements, the following additional steps might be required :

  1. Cameras can operate with or without a particular offset corrected/added to the output signal.  For that reason it might be wise to take a dark reference frame to measure the offset and dark signal (including its non-uniformities) for later correction.  In the experiments discussed here, 50 dark frames were taken and averaged to minimize the temporal noise.
  2. The data used in the measurement is coming from a relatively large area of the sensor and is relying on a uniform illumination of the complete Siemens star.  Moreover, the camera is using a lens and one has to take into account the lens vignetting or intensity fall-off towards the corners of the sensor.  For that reason a flat-fielding operation might be needed : take an image of a uniform test target, and use the data obtained to create a pixel gain map.  In the experiments discussed here, 50 flat field images were taken and averaged to minimize the temporal noise.
  3. The camera under test in this discussion delivers RAW data, without any processing.  If that was not the case it would have been worthwhile to check the linearity of the camera (e.g. use of a gamma correction) by means of the grey squares present on the testchart as well.

Taking it all together, the total measurement sequence of the MTF characterization is composed of :

  1. Shoot 50 images of the focused testchart, and calculate the average.  The result is called : Image_MTF,
  2. Shoot 50 flat field images with the same illumination as used to shoot the images of the focused testchart, and calculate the average image of all flat field images.  The result is called : Image_light,
  3. Shoot 50 images in the dark, and calculate the average image of all dark images.  The result is called : Image_dark,
  4. Both Image_MTF and Image_light are corrected for their offset and dark non-uniformities by subtracting Image_dark,
  5. The obtained correction (Image_light - Image_dark) will be used to create a gain map for each pixel, called Image_gain,
  6. The obtained correction (Image_MTF - Image_dark) will be corrected again for any non-uniformities in pixel illumination, based on Image_gain (a small code sketch of steps 4 to 6 follows below).
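
To make the correction steps 4 to 6 concrete, here is a minimal sketch in Python/NumPy.  The array names are mine and purely illustrative; each input is the average of the 50 frames described in steps 1 to 3.

```python
import numpy as np

def correct_mtf_image(image_mtf, image_light, image_dark):
    # Step 4 : remove offset and dark non-uniformities.
    mtf_minus_dark = image_mtf - image_dark
    light_minus_dark = image_light - image_dark

    # Step 5 : per-pixel gain map, normalized to the mean of the
    # flat field so the corrected image keeps its signal level.
    image_gain = light_minus_dark.mean() / light_minus_dark

    # Step 6 : flat-field the dark-corrected testchart image.
    return mtf_minus_dark * image_gain
```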

If this sequence is followed, an image like the one shown in Figure 1 can be obtained.

  1. Next the pixel coordinates of the center of the testchart need to be found.  This can be done manually or automatically.  The latter is done in this work, based on the presence of the 4 quadrants in the center of the testchart.
  2. Once the centroid of the testchart is known, several concentric circles are drawn with the centroid of the testchart as their common center.  An example of these concentric circles on top of the testchart is shown in Figure 2.

 

Figure 2 : Siemens star with concentric circles (shown in green), whose centers coincide with the centroid of the testchart (red cross).

  3. After creating the circles, the sensor output values of the various pixels lying on these circles are checked.  On every circle the pixel values change according to a sine wave of which the frequency is known : the pattern completes 72 full cycles around the circle, and the circle’s radius, in number of pixels, can be calculated.  For each of the circles, a theoretical sine wave can be fitted through the measured data.  Consequently for each circle a parameter can be found that corresponds to the amplitude of the fitted sine wave.
  4. In principle the MTF curve could now be constructed.  The only missing link is the value of the MTF for very low frequencies, close to DC.  This value can be found from the difference between the white values and black values of the four quadrants right in the middle of the testchart.
  5. Normalizing the obtained data completes the MTF curve : the calculated amplitudes of the sine waves are normalized with the signals of the four quadrants in the middle of the chart, and the frequencies of the sine waves are normalized to the sampling frequency of the imager (6 um pixel pitch).  A minimal sketch of this calculation is shown below.
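
As a minimal Python/NumPy sketch of this amplitude extraction and its normalization (the function and variable names are mine, and nearest-neighbour sampling is just the simplest choice) :

```python
import numpy as np

N_CYCLES = 72  # the Siemens star completes 72 sine cycles per revolution

def circle_amplitude(image, cx, cy, radius, n_samples=2048):
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    # Nearest-neighbour sampling of the pixels lying on the circle.
    x = np.round(cx + radius * np.cos(theta)).astype(int)
    y = np.round(cy + radius * np.sin(theta)).astype(int)
    values = image[y, x]
    # Least-squares fit of a sine of known frequency : the amplitude
    # follows from the sin/cos Fourier coefficients at N_CYCLES.
    a = 2.0 * np.mean(values * np.cos(N_CYCLES * theta))
    b = 2.0 * np.mean(values * np.sin(N_CYCLES * theta))
    return np.hypot(a, b)

# One point of the MTF curve, for an example radius of 150 pixels :
# modulation_dc = 0.5 * (white_level - black_level)  # center quadrants
# frequency = N_CYCLES / (2.0 * np.pi * 150.0)       # cycles per pixel
# mtf_value = circle_amplitude(image, cx, cy, 150.0) / modulation_dc
```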

The outcome of the complete exercise is shown in Figure 3.

 Figure 3 : Modulation Transfer Function of the camera-under-test.

As indicated in Figure 3, the MTF measurement is done with white light created by 3 colour LED arrays (wavelengths 470 nm, 525 nm, 630 nm).  As can be seen from the curve, the camera has a relatively low MTF, around 8 % at the Nyquist frequency (fN).  In theory an imager with a large fill factor can have an MTF value of 60 % at fN, but this camera is performing far below this theoretical value.  One should not forget, however, that this MTF measurement includes ALL components in the imaging system, not just the sensor !
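
For reference, the 60 % figure follows from the geometric MTF of the pixel aperture : for an aperture of width p the MTF is |sin(π·f·p)/(π·f·p)|, which at the Nyquist frequency fN = 1/(2·p) evaluates to 2/π ≈ 0.64, so roughly 60 % for a fill factor close to 100 %.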

Now that the MTF measurement method is explained, in the next blogs more MTF results will be shown and compared.

Albert, 20-02-2014.

International Solid-State Circuits Conference 2014

Wednesday, February 12th, 2014

 

The image sensors’ harvest at the ISSCC 2014 was pretty weak this year.  Only half of a session was devoted to imagers.  In the past, 2 full sessions were filled with imager presentations …

Samsung presented their latest development in the field : a BSI-CMOS pixel with deep trench isolations/separations and with a so-called vertical transfer gate.

  1. The DTI is a very narrow, but very deep trench into the silicon.  These trenches completely surround the individual pixels.  Moreover they go through the complete sheet of silicon (after back-side thinning this is just a few microns).  The trenches seem to completely eliminate the optical and electrical cross-talk in the silicon.  CCM (colour correction matrix) coefficients of devices without and with the DTI were shown, and the CCM of the DTI device comes much closer to the unity matrix.  This results in a much better SNR after colour processing.  The trenches seem to be filled with poly-silicon, which in its turn is isolated from the main silicon by an oxide.  Although not confirmed by the speaker, it is expected that the poly-silicon gates are used to bias the silicon of the pixel into accumulation to lower the dark current.  The dark current of the DTI pixel was equal to the dark current of the standard pixel without DTI.

    Because the pixels are 100 % isolated from each other, blooming is simply not possible.  This is an extra advantage of the DTI structure.

  2. The vertical transfer gate : the photodiode is not located directly at the silicon interface, but is buried in the silicon.  Above it the transfer gate is located, as well as the FD node.  So at the end of the exposure, the charges have to be transported upwards, out of the diode into the FD node.  This buried diode results in a remarkably high full well of 6200 electrons for a 1.12 um pixel with DTI.

According to the speaker, Samsung is ready for the next generation of pixels below 1 um.  Two personal remarks :

  1. I would love to see this pixel in combination with the light guide between the colour filters, presented by Panasonic a few years ago at IEDM.  That should result in a device without spectral, optical and electrical cross-talk.
  2. These devices are great masterpieces of integration in the 3rd dimension, and not that much silicon is left anymore.

There was also a nice presentation by Microsoft of their ToF device.  They are using a non-conventional pixel : four toggling, small gates with wide open areas in between.  At the “head” of the gates, the FD nodes are located.  The pixels are read out and processed in a fully differential mode, and have the option of being partly reset during exposure.  This removes the background illumination.

The circuitry around the pixels allows the pixels to run at :

  • different shutter and different gain settings, resulting in an expanded dynamic range,
  • multiple modulation frequencies, solving the conflict of precision and depth range,
  • multi-phase operation, resulting in high accuracy and robustness.

The device realized has a depth accuracy of 0.5 % within a range of 0.8 m … 4.2 m.
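
To illustrate how a four-tap continuous-wave ToF pixel turns its samples into depth, here is a minimal Python/NumPy sketch.  The tap names q0..q270 and the sign convention are my assumptions for illustration, not taken from the Microsoft paper.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def tof_depth(q0, q90, q180, q270, f_mod):
    # Differential readout : opposite taps are subtracted, which
    # cancels the common-mode background illumination.
    phase = np.arctan2(q90 - q270, q0 - q180) % (2.0 * np.pi)
    # One full phase cycle corresponds to half a modulation wavelength,
    # because the light travels to the scene and back.
    return C * phase / (4.0 * np.pi * f_mod)

# With f_mod = 20 MHz the unambiguous range is C / (2 * f_mod), about
# 7.5 m.  Combining multiple modulation frequencies, as done in this
# device, resolves the conflict of precision and depth range.
```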

Albert.

February 12th, 2014.

Electronic Imaging 2014 (2)

Saturday, February 8th, 2014

Boukhayma (CEA-Leti) presented a very nice paper about the noise in PPD-based sensors.  He modelled the electronic pixel components for their noise performance, and based on this analysis he handed out some guidelines to limit the noise level in the pixels.  Although not all conclusions were new, it was nice to see them all listed in one presentation and supported by simulation results : lower the FD node capacitance, lower the width of the SF transistor, choose a p-MOS transistor as SF because an n-MOS transistor will give too much 1/f noise, and optimize the length of the SF (the formula for the optimum gate length, depending on the gate-drain capacitance, gate-source capacitance, FD capacitance and width of the transistor, was shown).  If the thermal noise of the pixel is dominant, it does not matter whether one uses a simple SF in the pixel or an in-pixel gain stage.  But if the 1/f noise is dominant, one should avoid a standard n-type MOS transistor.

Angel Rodriguez-Vazquez (IMSE-CNM, Spain) gave a nice overview of ADC architectures used for image sensors.  It is a pity that only 20 minutes of presentation time were provided for such an overview paper.  These kinds of overview papers deserve more time.

Seo (Shizuoka University, Japan) described a pixel without STI, but with isolation between the pixels based on p-wells and p+ channel stops.  The omission of the STI has to do with the dark-current issues that come together with the STI.  The authors showed a very cute lay-out of a 2×2 shared pixel concept (1.75 T/cell).  All transistors and transfer gates are ring shaped, located in the center of the 2×2 pixel, with the 4 PPDs on the outside; it looks a bit like a spider with only 4 legs.  The pixels are pretty large (7.5 x 7.5 um2), in combination with a relatively low fill factor of 43 % as well as a low conversion gain of 23 uV/electron.  Of course the ring structure of the output transistors consumes a large amount of silicon, and seems to result in a relatively large floating-diffusion capacitance.  The dark current is reduced by a factor of 20 (compared to the STI-based sensor), down to 30 pA/cm2, with QE = 68 % @ 600 nm.

It is hard to decide who will win the Award for the most artistic pixel lay-out : the hedgehog of Tohoku University or the spider of Shizuoka University ?  But in any case, the award goes to a Japanese university.  Great work !

Albert

February 7th, 2014.

Electronic Imaging 2014 (1)

Friday, February 7th, 2014

An interesting paper of Tohoku University was presented at the EI14.  They published their paper about a 20M frame/s sensor already a while ago at the ISSCC, but they never disclosed the pixel structure that empties the PPD within the extremely short frame times.  The EI14 paper focused on the pixel architecture and specifically on the PPD structure.  Miyauchi explained that two technological “tricks” are applied to create an electric field in the PPD to speed up the transfer of the photon-generated electrons from the PPD to the FD node.  Firstly, a gradient in the n-doping is implemented by using three different n-dopings; secondly, the n-regions are not simple rectangles or squares, but have the look of hedgehogs with all kinds of sharp needles extending away from the FD node.  On the one hand the lay-out of the triple n-implantation looks quite complicated, on the other hand it looks quite funny as well, but after all, it seems to be effective.

Simulations as well as measurement results were shown : a simulated worst-case transfer time of 9 ns, and a measured transfer time of about 5 ns.  These are very spectacular results, taking into account that the pixel size is 32 x 32 um2.  As far as the overall speed of the sensor is concerned : 10M frames/s are reported for a device with 100k pixels, 128 on-chip storage nodes for every pixel and a power consumption of 10 W.  The device can also run in a 50k-pixel mode, with the same power consumption but then with a frame rate of 20M frames/s and a storage capacity of 256 frames on-chip.

 

There were two papers that used the same image sensor concept : allow the pixels to integrate up to a particular saturation level, and record the time it takes to reach this point.  This idea is not really new (was it Orly who did this for the first time with her conditional-reset idea ?), but the way in which this concept is applied seems to be new.

El-Desouki (King Abdulaziz City for Science and Technology, Saudi Arabia) is using SPADs : each SPAD is allowed to count its events up to a certain defined number, which converts an amount of light into a time slot.  This time slot is measured by converting it into the digital domain, and this data is sent out.  A further sophistication of the idea is not to count in the digital domain (it needs too many transistors per pixel) but to do the counting in the analog domain.  Finally the author explained how one can make a TDI sensor based on this concept.

A bit more “out-of-the-box” was the concept introduced by Dietz (University of Kentucky).  Allow the pixels to integrate up to a certain level (e.g. saturation), record the time it takes to reach that point, and perform this action continuously in the time domain.  In this way one gets, for each pixel, a kind of analog signal describing the behavior of that pixel in the time domain.  This way of operating the pixels makes the sensor completely free of any frame rate.  If an image is needed, one can take whatever timeslot of the recorded time domain, take the analog signal out of the memory, and average the analog signal within this timeslot.  Of course every pixel needs a lot of processing as well as a huge storage space to record its behavior in the time domain.  But with the stacked concept of imager-processor-memory, the speaker was convinced that this should be feasible in the future.
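
The time-to-saturation principle behind both papers fits in a few lines.  A minimal Python sketch, with an assumed threshold value and illustrative names :

```python
Q_SAT = 10_000.0  # saturation threshold in electrons (assumed value)

def time_to_saturation(photocurrent):
    # photocurrent in electrons/s : brighter pixels hit the threshold sooner.
    return Q_SAT / photocurrent

def intensity_from_time(t_sat):
    # Reconstruction : the light level is inversely proportional to the
    # recorded time, so the dynamic range is set by the measurable time
    # span rather than by the charge capacity of the pixel.
    return Q_SAT / t_sat
```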

Yonai (NHK, Japan) presented some new results obtained with the existing 33M UHDTV sensor, already presented earlier in a WKA-winning paper.  But this time the authors changed the timing diagram such that digital CDS could be performed off-chip.  Results : a 50 times reduction in FPN (down to 1 electron) and a 2 times reduction in thermal noise (down to 3 electrons @ 60 fr/s).
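
For readers less familiar with digital CDS : the reset level of each pixel is digitized separately and subtracted off-chip from the digitized signal level.  A minimal Python/NumPy sketch, with illustrative array names :

```python
import numpy as np

def digital_cds(signal_frame, reset_frame):
    # Per-pixel subtraction removes the pixel-to-pixel offsets that
    # otherwise show up as fixed-pattern noise.
    return signal_frame.astype(np.int32) - reset_frame.astype(np.int32)
```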

Kang (Samsung) presented a further sophistication of the RGB-Z sensor that was already presented by Kim at the ISSCC.  From one single imager a normal RGB image can be generated, as well as a depth map by using the imager in a ToF mode.  The author presented a simple but intelligent technique to improve the performance of the device by removing any asymmetry in pixel design/lay-out/fabrication : simply reversing Q0 and Q180 from frame to frame.  Actually the technique looks very much like chopping in analog circuitry.
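
A minimal Python sketch of why the chopping works (the names are mine, not from the Samsung paper) : swapping the tap roles on alternate frames makes a fixed tap mismatch enter the differential signal with opposite signs, so it cancels in the combination.

```python
def chopped_signal(diff_normal, diff_swapped):
    # diff_normal  = (Q0 - Q180) + mismatch   (normal tap assignment)
    # diff_swapped = (Q180 - Q0) + mismatch   (taps reversed next frame)
    # Half the difference recovers Q0 - Q180 with the mismatch removed.
    return 0.5 * (diff_normal - diff_swapped)
```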

 

Albert

February 7th, 2014.