Announcement of the SECOND IMAGING FORUM, Dec. 11-12, 2014.

April 6th, 2014

Mark your agenda now for the second solid-state imaging forum, scheduled for Dec. 11-12, 2014.

After the successful first forum in 2013, I am happy to announce a second one.  Like the first, this second solid-state imaging forum will be a high-level, technical, short course focusing on one particular hot topic in the field of solid-state imaging.  The audience will be strictly limited to 28 people, to stimulate as much as possible the interaction between the participants and speaker(s).  The subject of the second forum will be : “Advanced Digital Image Processing”.

More information about the speaker and the agenda of the second forum will follow in the coming weeks, but I wanted to share this announcement with you as early as possible to make sure you can keep your agenda free on these days.

Albert,

April 6th, 2014.

How to Measure Modulation Transfer Function (2)

March 25th, 2014

In the previous blog, the measurement of the Modulation Transfer Function by means of the Siemens star was explained.  In this blog, this method will be applied to check out the effect of the lens F-number on the MTF.

In Figure 1 the result of the MTF measurement is shown.


Figure 1 : Modulation Transfer Function for two settings of the lens F-number.

It is quite nice to see the influence of the F-number :

  • a low F-number refers to a large lens opening (= a lot of light reaches the sensor, so a short exposure time is needed); in that case the incoming light reaches the sensor within a wide cone of rays (= large deviations from the normal),
  • a large F-number refers to a small lens opening (= much less light reaches the sensor, so a long exposure time is needed); in that case the incoming light reaches the sensor within a narrow cone of rays (= almost perpendicular to the sensor).

Light that reaches the sensor perpendicularly suffers less from optical cross-talk than light that reaches the sensor under a certain angle, deviating more from the normal.  More (optical) cross-talk results in less contrast between neighbouring pixels, thus lowering the MTF at larger spatial frequencies.  And this effect is exactly what is observed in Figure 1 !
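
To put a rough number on this effect : in a simple 1-D model (a back-of-the-envelope sketch of my own, not a measurement), cross-talk of a fraction α into each nearest neighbour acts as a convolution with the kernel (α, 1-2α, α), which multiplies the sensor MTF by

```latex
\mathrm{MTF}_{\mathrm{xtalk}}(f) = 1 - 2\alpha + 2\alpha\cos(2\pi f p)
```

with p the pixel pitch.  At the Nyquist frequency this factor reduces to 1-4α : already 10 % cross-talk per neighbour lowers the MTF at Nyquist by 40 %.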

Next time something about colour and MTF.

Albert, 25-03-2014.

How to Measure Modulation Transfer Function (1)

February 20th, 2014

In simple wording, the modulation transfer function or MTF is a measure of the spatial resolution of an imaging component.  The latter can be an image sensor, a lens, a mirror or the complete camera.  In technical terms, the MTF is the magnitude of the optical transfer function, being the Fourier transform of the response to a point illumination.
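
For reference, written as formulas (standard textbook definitions, with PSF the response to a point illumination) :

```latex
\mathrm{OTF}(f) = \mathcal{F}\{\mathrm{PSF}(x)\}, \qquad \mathrm{MTF}(f) = \bigl|\mathrm{OTF}(f)\bigr| .
```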

The MTF is not really the easiest measurement that can be done on an imaging system.  Various methods can be used to characterize the MTF, such as the “slit image”, the “knife edge”, the “laser-speckle technique” and “imaging of sine-wave patterns”.  It should be noted that all methods listed, except the “laser-speckle technique”, measure the MTF of the complete imaging system : all parts of the imaging system are included, such as lens, filters (if present), cover glass and image sensor.  Even the processing of the sensor’s signal can have an influence on the MTF, and will be included in the measurement.

In this first MTF-blog the measurement of the modulation transfer function based on imaging with a sine-wave pattern will be discussed.  It should be noted that in this case dedicated testcharts are used to measure the MTF, and the pattern on the chart should change sinusoidally between dark parts and light parts.  If a square-wave pattern is used instead, not the MTF but the CTF (= Contrast Transfer Function) will be measured.  And the values obtained for the CTF will be larger than the ones obtained for the MTF.
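
For completeness (a standard conversion, not part of the measurement described here) : the CTF can be translated into the MTF by means of the Coltman series, of which the first terms are

```latex
\mathrm{MTF}(f) = \frac{\pi}{4}\left[\mathrm{CTF}(f) + \frac{\mathrm{CTF}(3f)}{3} - \frac{\mathrm{CTF}(5f)}{5} + \frac{\mathrm{CTF}(7f)}{7} + \ldots\right] .
```

The leading factor π/4 ≈ 0.79 already indicates why the CTF values come out larger than the MTF values.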

The method described here is based on the work of Anke Neumann, written down in her MSc thesis “Verfahren zur Aufloesungsmessung digitaler Kameras”, June 2003.  The basic idea is to use a single testchart with a so-called Siemens star.  An example of such a testchart is illustrated in Figure 1.

Figure 1 : Output image of the camera-under-test observing the Siemens star.

(Without going further into detail : the testchart contains more structures than are used in the MTF measurement reported here.)  The heart of the testchart is the Siemens star with 72 “spokes”.  As can be seen, the distance between the black and white structures on the chart becomes larger as one moves away from the center of the chart.  In other words, the spatial frequency of the sinusoidal pattern becomes lower towards the outside of the Siemens star, and higher closer to the center.  Around the center of the Siemens star, the spatial frequency of the sinusoidal pattern is even too high to be resolved by the camera-under-test and aliasing shows up.  In the center of the Siemens star a small circle is included with two white and two black quarters.  These are going to play a very important role in the measurements.

The measurement procedure goes as follows :

  1. Focus the image of the Siemens star, placed in front of the camera, as well as possible on the imager.  Try to bring the Siemens star as close as possible to the edges (top and bottom) of the imager,
  2. Shoot an image of the testchart (in the example described here, 50 images were taken and averaged to limit the temporal noise).

In principle, these two steps are all one needs to be able to measure/calculate the MTF.  But to obtain a higher accuracy of the measurements, the following additional steps might be required :

  1. Cameras can operate with or without a particular offset corrected/added to the output signal.  For that reason it might be wise to take a dark reference frame to measure the offset and dark signal (including its non-uniformities) for later correction.  In the experiments discussed here, 50 dark frames were taken and averaged to minimize the temporal noise.
  2. The data used in the measurement comes from a relatively large area of the sensor and relies on a uniform illumination of the complete Siemens star.  Moreover, the camera uses a lens, so one has to take into account the lens vignetting or intensity fall-off towards the corners of the sensor.  For that reason a flat-fielding operation might be needed : take an image of a uniform test target, and use the data obtained to create a pixel gain map.  In the experiments discussed here, 50 flat-field images were taken and averaged to minimize the temporal noise.
  3. The camera under test in this discussion delivers RAW data, without any processing.  If that were not the case, it would have been worthwhile to check the linearity of the camera (e.g. the use of a gamma correction) by means of the grey squares present on the testchart as well.

Taken all together, the total measurement sequence of the MTF characterization is composed of :

  1. Shoot 50 images of the focused testchart, and calculate the average.  The result is called : Image_MTF,
  2. Shoot 50 flat-field images with the same illumination as used to shoot the images of the focused testchart, and calculate the average image of all flat-field images.  The result is called : Image_light,
  3. Shoot 50 images in dark, and calculate the average image of all dark images.  The result is called : Image_dark,
  4. Both Image_MTF and Image_light are corrected for their offset and dark non-uniformities by subtracting Image_dark,
  5. The obtained correction (Image_light – Image_dark) is used to create a gain map for each pixel, called Image_gain,
  6. The obtained correction (Image_MTF – Image_dark) is corrected once more for any non-uniformities in pixel illumination, based on Image_gain.

If this sequence is followed, an image like the one shown in Figure 1 can be obtained.
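
As an illustration only, the sequence above might look as follows in code (a minimal numpy sketch under my own assumptions about the data layout; this is not the actual software used for these measurements) :

```python
import numpy as np

def flat_field_correct(mtf_frames, flat_frames, dark_frames):
    """Average the three frame stacks (N, H, W) and apply the dark and
    flat-field corrections of steps 1-6 above."""
    image_mtf   = np.mean(mtf_frames,  axis=0)   # step 1 : Image_MTF
    image_light = np.mean(flat_frames, axis=0)   # step 2 : Image_light
    image_dark  = np.mean(dark_frames, axis=0)   # step 3 : Image_dark

    mtf_corr   = image_mtf   - image_dark        # step 4
    light_corr = image_light - image_dark        # step 4

    # step 5 : per-pixel gain map, normalized so that its mean equals 1
    image_gain = np.mean(light_corr) / light_corr

    return mtf_corr * image_gain                 # step 6
```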

  1. Next the pixel coordinates of the center of the testchart need to be found.  This can be done manually or automatically.  The latter is done in this work, based on the presence of the four quadrants in the center of the testchart.
  2. Once the centroid of the testchart is known, several concentric circles are drawn with the centroid of the testchart as their common center.  An example of these concentric circles on top of the testchart is shown in Figure 2.


Figure 2 : Siemens star with concentric circles (shown in green), whose centers coincide with the centroid of the testchart (red cross).

  1. After creating the circles, the sensor output values of the various pixels lying on these circles are checked.  On every circle the pixel values change according to a sine wave of known frequency (each circle contains 72 complete cycles of the sine wave, and its radius, in number of pixels, can be calculated).  For each of the circles, a theoretical sine wave can be fitted through the measured data.  Consequently for each circle a parameter can be found that corresponds to the amplitude of the fitted sine wave (see the sketch after this list).
  2. In principle the MTF curve could now be constructed; the only missing link is the value of the MTF for very low frequencies close to DC.  This value can be found as the difference between the white values and black values of the four quadrants right in the middle of the testchart.
  3. Normalizing the obtained data completes the MTF curve : the calculated amplitudes of the sine waves are normalized to the signals of the four quadrants in the middle of the chart, and the frequencies of the sine waves are normalized to the sampling frequency of the imager (6 µm pixel pitch).
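
The sine-wave fit of step 1 can be sketched as follows (my own illustration in Python/numpy; the function name and parameters are hypothetical, and nearest-neighbour sampling is used for simplicity) :

```python
import numpy as np

def circle_amplitude(img, cx, cy, radius, cycles=72, n_samples=2048):
    """Fit a sine wave with a known number of cycles per revolution
    (72 for this Siemens star) to the pixel values on one circle and
    return the fitted amplitude."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    x = np.round(cx + radius * np.cos(theta)).astype(int)
    y = np.round(cy + radius * np.sin(theta)).astype(int)
    values = img[y, x].astype(float)

    # least-squares projection onto sin/cos of the known frequency
    a = 2.0 * np.mean(values * np.sin(cycles * theta))
    b = 2.0 * np.mean(values * np.cos(cycles * theta))
    return np.hypot(a, b)
```

The spatial frequency belonging to each circle follows from its circumference : 72 cycles over 2πr pixels, i.e. f = 72/(2πr) cycles per pixel, which is then normalized as described in step 3.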

The outcome of the complete exercise is shown in Figure 3.

Figure 3 : Modulation Transfer Function of the camera-under-test.

As indicated in Figure 3, the MTF measurement is done with white light created by 3 colour LED arrays (wavelengths 470 nm, 525 nm, 630 nm).  As can be seen from the curve, the camera has a relatively low MTF, around 8 % at the Nyquist frequency (fN).  In theory an imager with a large fill factor can have an MTF value of 60 % at fN, so this camera performs far below the theoretical value.  But one should not forget : this MTF measurement includes ALL components in the imaging system, not just the sensor !
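
For reference, the 60 % figure can be traced back to the geometrical MTF of the pixel aperture (a textbook calculation, added here by me) : for a pixel with 100 % fill factor and pitch p,

```latex
\mathrm{MTF}_{\mathrm{pixel}}(f) = \left|\frac{\sin(\pi f p)}{\pi f p}\right|
\qquad\Rightarrow\qquad
\mathrm{MTF}_{\mathrm{pixel}}(f_N) = \frac{\sin(\pi/2)}{\pi/2} = \frac{2}{\pi} \approx 0.64
```

with fN = 1/(2p), so roughly the 60 % mentioned above.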

Now that the MTF measurement method is explained, in the next blogs more MTF results will be shown and compared.

Albert, 20-02-2014.

International Solid-State Circuits Conference 2014

February 12th, 2014


The image sensors’ harvest at the ISSCC 2014 was pretty weak this year.  Only half of a session was devoted to imagers.  In the past, 2 full sessions were filled with imager presentations …

Samsung presented their latest development in the field : a BSI-CMOS pixel with deep trench isolations/separations and with a so-called vertical transfer gate.

  1. The DTI is a very narrow, but very deep trench in the silicon.  These trenches completely surround the individual pixels.  Moreover they go through the complete sheet of silicon (after back-side thinning this is just a few microns).  The trenches seem to completely eliminate the optical and electrical cross-talk in the silicon.  CCM (colour correction matrix) coefficients of devices without and with the DTI were shown, and the CCM of the DTI device comes much closer to the unity matrix.  This results in a much better SNR after colour processing (see the sketch after this list).  The trenches seem to be filled with poly-silicon, which in turn is isolated from the main silicon by an oxide.  Although not confirmed by the speaker, it is expected that the poly-silicon gates are used to bias the silicon of the pixel into accumulation to lower the dark current.  The dark current of the DTI pixel was equal to the dark current of the standard pixel without DTI.

    Because the pixels are 100 % isolated from each other, blooming is simply not possible.  This is an extra advantage of the DTI structure.

  2. The vertical transfer gate : the photodiode is not located directly at the silicon interface, but is buried in the silicon.  Above it, the transfer gate is located, as well as the FD node.  So at the end of the exposure, the charges have to be transported upwards, out of the diode into the FD node.  This buried diode results in a remarkably high full well of 6200 electrons for a 1.12 um pixel with DTI.
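
As a side note on the CCM argument in point 1 above : colour correction amplifies the noise roughly by the root-sum-square of each matrix row, which is a quick way to see why a near-unity CCM gives a better SNR.  A small sketch with illustrative numbers (not Samsung's data) :

```python
import numpy as np

def ccm_noise_gain(ccm):
    """Per-channel noise amplification of a colour correction matrix :
    for uncorrelated pixel noise, the output noise of channel i scales
    with the root-sum-square of row i."""
    return np.sqrt(np.sum(np.asarray(ccm) ** 2, axis=1))

# illustrative matrices : strong cross-talk needs an aggressive CCM,
# a near-unity CCM barely amplifies the noise
ccm_no_dti = [[ 1.8, -0.5, -0.3],
              [-0.4,  1.9, -0.5],
              [-0.2, -0.6,  1.8]]
ccm_dti    = [[ 1.1,  -0.05, -0.05],
              [-0.05,  1.1,  -0.05],
              [-0.05, -0.05,  1.1 ]]

print(ccm_noise_gain(ccm_no_dti))  # ~[1.9, 2.0, 1.9] -> noisy
print(ccm_noise_gain(ccm_dti))     # ~[1.1, 1.1, 1.1] -> close to 1
```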

According to the speaker, Samsung is ready for the next generation of pixels below 1 um.  Two personal remarks :

  1. I would love to see this pixel in combination with the light guide between the colour filters, presented by Panasonic a few years ago at IEDM.  That should result in a device without spectral, optical and electrical cross-talk.
  2. These devices are great masterpieces of integration in the 3rd dimension, and not that much silicon is left anymore.

There was also a nice presentation by Microsoft of their ToF device.  They use a non-conventional pixel : four toggling, small gates with wide open areas in between.  At the “head” of the gates, the FD nodes are located.  The pixels are read out and processed in full differential mode and have the option of being partly reset during exposure.  This removes the background illumination.

The circuitry around the pixels allows the pixels to run at :

  • different shutter and different gain settings, resulting in an expanded dynamic range,
  • multiple modulation frequencies, solving the conflict of precision and depth range,
  • multi-phase operation, resulting in high accuracy and robustness.

The device realized has a depth accuracy of 0.5 % within a range of 0.8 m … 4.2 m.
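
As background to the precision-versus-range conflict mentioned above (a standard continuous-wave ToF relation, with example numbers of my own, not taken from the paper) : the unambiguous depth range is

```latex
d_{\mathrm{max}} = \frac{c}{2\,f_{\mathrm{mod}}}
```

so f_mod = 20 MHz gives d_max = 7.5 m at moderate precision, while f_mod = 100 MHz measures more precisely but only up to 1.5 m.  Combining several modulation frequencies resolves the ambiguity while keeping the precision of the highest frequency.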

Albert.

February 12th, 2014.

Electronic Imaging 2014 (2)

February 8th, 2014

Boukhayma (CEA-Leti) presented a very nice paper about the noise in PPD-based sensors.  He modelled the electronic pixel components for their noise performance, and based on this analysis he handed out some guidelines to limit the noise level in the pixels.  Although not all conclusions are new, it was nice to see them all listed in one presentation and supported by simulation results : lower the FD node capacitance; lower the width of the SF transistor; choose a p-MOS transistor as SF, because an n-MOS transistor will give too much 1/f noise; and optimize the length of the SF (the formula for the optimum gate length, depending on gate-drain capacitance, gate-source capacitance, FD capacitance and width of the transistor, was shown).  If the thermal noise of the pixel is dominant, it does not matter whether one uses a simple SF in the pixel or an in-pixel gain stage.  But if the 1/f noise is dominant, one should avoid a standard n-type MOS transistor.
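
The reasoning behind the first guideline can be written down in one line (a standard relation, added by me for clarity) : with a conversion gain CG = q/C_FD, an output voltage noise v_n refers back to the input as

```latex
n_{e^-} = \frac{v_n}{A_{SF}\cdot CG} = \frac{v_n\,C_{FD}}{A_{SF}\,q}
```

electrons (A_SF being the source-follower gain), so every femtofarad shaved off the floating diffusion directly lowers the input-referred noise.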

Angel Rodriguez-Vazquez (IMSE-CNM, Spain) gave a nice overview of ADC architectures used for image sensors.  It is a pity that for such an overview paper only 20 minutes of presentation time were provided.  These kinds of overview papers deserve more time.

Seo (Shizuoka University, Japan) described a pixel without STI, but with isolation between the pixels based on p-wells and p+ channel stops.  The omission of the STI has to do with the dark-current issues that come together with the STI.  The authors showed a very cute lay-out of a 2×2 shared pixel concept (1.75 T/cell).  All transistors and transfer gates are ring shaped, located in the center of the 2×2 pixel with the 4 PPDs at the outside; it looks a bit like a spider with only 4 legs.  The pixels are pretty large (7.5 x 7.5 um2), in combination with a relatively low fill factor of 43 %, as well as a low conversion gain of 23 uV/electron.  Of course the ring structure of the output transistors consumes a large amount of silicon, and seems to result in a relatively large floating diffusion capacitance.  The dark current is reduced by a factor of 20 (compared to the STI-based sensor), down to 30 pA/cm2, with QE = 68 % @ 600 nm.
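
That last remark can be checked with a one-line calculation (my own estimate, not a number from the paper) :

```latex
C_{FD} \approx \frac{q}{CG} = \frac{1.6\times10^{-19}\ \mathrm{C}}{23\ \mu\mathrm{V}/e^-} \approx 7\ \mathrm{fF},
```

which is indeed large compared to the few fF of a modern high-conversion-gain pixel.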

It is hard to decide who will win the award for the most artistic pixel lay-out : the hedgehog of Tohoku University or the spider of Shizuoka University ?  But in any case, the award goes to a Japanese university.  Great work !

Albert

February 7th, 2014.

Electronic Imaging 2014 (1)

February 7th, 2014

An interesting paper of Tohoku University was presented at the EI14.  They published a paper about their 20M frames/s sensor already a while ago at the ISSCC, but they never disclosed the pixel structure that empties the PPD within the extremely short frame times.  The EI14 paper focused on the pixel architecture and specifically on the PPD structure.  Miyauchi explained that two technological “tricks” are applied to create an electric field in the PPD to speed up the transfer of the photon-generated electrons from the PPD to the FD node.  Firstly, a gradient in the n-doping is implemented by using three different n-dopings; secondly, the n-regions are not simple rectangles or squares, but look like hedgehogs with all kinds of sharp needles extending away from the FD node.  On the one hand the lay-out of the triple n-implantation looks quite complicated, on the other hand it looks quite funny as well, but after all, it seems to be effective.

Simulations as well as measurement results were shown.  The simulations showed a worst-case transfer time of 9 ns; the measured transfer time was about 5 ns.  These are very spectacular results taking into account that the pixel size is 32 x 32 um2.  As far as the overall speed of the sensor is concerned : 10M frames/s are reported for a device with 100k pixels, 128 on-chip storage nodes for every pixel and a power consumption of 10W.  The device can also run in a 50k pixels mode, with the same power consumption but then with a frame rate of 20M frames/s and with a storage capacity of 256 frames on-chip.


There were two papers that used the same image sensor concept : allow the pixels to integrate up to a particular saturation level, and record the time it takes to reach this point.  This idea is not really new (was it Orly who did this for the first time with her conditional-reset idea ?), but the way in which this concept is applied seems to be new.

El-Desouki (King Abdulaziz City for Science and Technology, Saudi Arabia) uses SPADs and allows each SPAD to count its events up to a certain defined number; this converts an amount of light into a time slot.  The pixel measures this time slot by converting it into the digital domain and sends out this data.  A further sophistication of the idea is not to count in the digital domain (it needs too many transistors per pixel) but to do the counting in the analog domain.  Finally the author explained how one can make a TDI sensor based on this concept.
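
The light-to-time conversion behind this scheme can be summarized as follows (my own shorthand, not the author's notation) : if a pixel waits until a fixed number of events N has been detected, the waiting time is inversely proportional to the photon flux Φ,

```latex
t_N = \frac{N}{\eta\,\Phi} \qquad\Longrightarrow\qquad \Phi = \frac{N}{\eta\,t_N}
```

with η the photon detection efficiency.  Bright pixels reach the count quickly and dark pixels slowly, so the dynamic range is set by the range of measurable times rather than by a well capacity.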

A bit more “out-of-the-box” was the concept introduced by Dietz (University of Kentucky).  Allow the pixels to integrate up to a certain level (e.g. saturation), record the time it takes to reach that point, and perform this action continuously in the time domain.  In this way one gets, for each pixel, a kind of analog signal describing its behavior in the time domain.  This way of operating the pixels makes the sensor completely free of any frame rate.  If an image is needed, one can take whatever timeslot in the recorded time domain, take the analog signal out of the memory, and average the analog signal within this timeslot.  Of course every pixel needs a lot of processing as well as a huge storage space to record its behavior in the time domain.  But with the stacked concept of imager-processor-memory, the speaker was convinced that in the future this should be feasible.

Yonai (NHK, Japan) presented some new results obtained with the existing 33M UHDTV sensor, already presented earlier in a WKA-winning paper.  But this time the authors changed the timing diagram such that digital CDS could be performed off-chip.  Results : a 50 times reduction in FPN (down to 1 electron) and a 2 times reduction in thermal noise (down to 3 electrons @ 60 fr/s).

Kang (Samsung) presented a further sophistication of the RGB-Z sensor that was already presented by Kim at the ISSCC.  From one single imager, a normal RGB image can be generated, as well as a depth map by using the imager in a ToF mode.  The author presented a simple but intelligent technique to improve the performance of the device by removing any asymmetry in pixel design/lay-out/fabrication.  The technique applied simply reverses the Q0 and Q180 signals from frame to frame.  Actually the technique looks very much like chopping in analog circuitry.


Albert

February 7th, 2014.

Merry Christmas and Happy New Year

December 20th, 2013

Good Bye 2013 !  The year is almost over.  And as I did in the foregoing years, also this time I would like to take a quick look back and see what 2013 brought us.  Once again I can state that 2013 was a great year for Harvest Imaging !  The year started with the move to a new office space.  In the meantime all furniture, equipment and infrastructure has been installed and is in operation.  So most of the blogs you could read this year were “born” in my new office space.  This is especially true for the blogs that contained measurement data.

If I look over the “products” of Harvest Imaging, I can split them up into three groups :

  • The training courses, in-house as well as public courses.  It is and remains amazing, and sometimes hard to believe, where all the people who attend the courses come from.  In 2013 I had a training almost every other week, and I just completed course number 150 !  It is very motivating to experience that so many young engineers step into the challenging but very rewarding world of imaging,
  • The consulting activities.  I hope that my readers understand that I cannot elaborate on these for confidentiality reasons.  But I can indicate that my expertise was used in the field of imaging technology as well as in intellectual-property-related projects,
  • The new product of Harvest Imaging : the organization of the Solid-State Imaging Forum.  The very first edition of this forum was organized this December, focusing on “ADCs for Imagers”.  It was really a success, and the large attendance proves that there is a need for this kind of in-depth information and knowledge exchange.

To conclude this overview of products, it is a pleasure for me to thank all my customers who brought business to Harvest Imaging, in one way or another.  It is great to experience your trust and confidence by consulting the expertise of Harvest Imaging.  Thanks very much !

2013 is an odd number, and that inherently translates into another International Image Sensor Workshop, this time in the USA.  My friends in the field, Boyd Fowler, Eric Fossum and Gennadyi Agranov, organized another great Workshop.  The location was Snowbird in Utah, where all technical information was exchanged, distributed and absorbed (literally) at a very high level.  Although once more the technical and scientific level of the Workshop was outstanding, the highlight for me was the “meet and greet” with Michael Tompsett, the real inventor of the CCD image sensor.  He gave a very impressive overview of his history in the CCD imaging world and clearly explained to the audience that the 2009 Nobel Prize for the invention of the CCD image sensor went to the wrong person.  Thanks to the chairs of the Workshop for taking the initiative to invite Michael Tompsett !

To conclude, I wish all of you the very best for 2014, and hope that we will regularly “meet” through this blog.  Thanks for visiting the website of Harvest Imaging, hopefully see you next year.  Welcome 2014 !

Albert, 20-12-2013.


How to Measure Full Well Capacity (3)

December 6th, 2013

From the two foregoing discussions on the full well capacity, it could be learned that :

-       In the case the full well is determined/limited by the ADC, comparable results for the FWC can be obtained by means of linearity measurements as well as from the mean-variance method,

-       In the case the full well is not determined/limited by the ADC, the results obtained from the linearity measurements show larger full well values than the ones obtained from the mean-variance method.

To explain the discrepancy between the FWC data in the latter case, one should realize that when the average output signal goes into saturation, a few non-uniformity issues pop up simultaneously :

-       PRNU or photo-response non-uniformities : the pixels with the highest sensitivity can reach saturation first,

-       Non-uniformities in saturation level, some pixels will saturate at a lower FWC than others,

-       It is not clear from the measurements which part of the pixel is causing the saturation : the pinned photodiode, the floating diffusion capacitance, the output swing limitation of the source follower, or the output swing limitation of the analog circuitry.  Moreover, all these limitations can interfere with each other, which makes the situation even more complex to understand and explain.

To find out what is going on, the fixed-pattern noise was measured, and some interesting results were obtained.  The analog gain is set to a low value, and the reference voltage of the ADC is set to a higher voltage (the reference voltage defines the analog input voltage that corresponds to an output of all “1”s).  In this way the ADC is neither limiting the output swing, nor defining the FWC.

The measurement results are shown in Figure 1 : the left axis indicates the average output signal of 100 x 100 pixels as a function of the integration/exposure time; the right axis shows the fixed-pattern noise obtained from these 100 x 100 pixels, also as a function of the exposure time.

Figure 1 : Average sensor output and fixed pattern noise as a function of exposure time for a window of 100 x 100 pixels.
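
For clarity, the two quantities plotted in Figure 1 can be obtained as follows (a minimal numpy sketch under my own assumptions, not the actual evaluation software) :

```python
import numpy as np

def signal_and_fpn(frames):
    """frames : stack (N, 100, 100) of raw images taken at one exposure
    time.  Averaging over the stack suppresses the temporal noise; the
    spatial standard deviation of the averaged frame is the FPN."""
    avg = np.mean(frames, axis=0)      # per-pixel temporal average
    return np.mean(avg), np.std(avg)   # (average signal, FPN)
```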

Some interesting details can be revealed from the FPN data :

-       For very low values of the signal (exposure time < 1 ms), the FPN shows a kind of plateau, indicating the FPN in dark,

-       For moderate values of the signal (1 ms < exposure time < 12 ms), the FPN increases linearly, dominated by the PRNU, which is proportional to the average signal value,

-       For higher values of the signal, in the region where the output signal tends to saturate (12 ms < exposure time < 16 ms), the FPN grows faster and tends to saturate as well.  Most probably this is the effect of the pixels that saturate first.  The FPN at saturation is larger than the PRNU, and for that reason the FPN increases.  The FPN tends to saturate because, once all pixels are saturated, the FPN no longer changes,

-       For saturated values of the signal (16 ms < exposure time < 20 ms) the FPN gets a second boost.  It is not completely clear what is happening here (the camera and sensor are “unknown”), but most likely the double sampling of the reference and useful signal starts showing some “black sun” or “eclipse” effects.  This results in a larger FPN,

-       For the largest exposure times (exposure time > 20 ms), all pixels are running in the “black sun” or “eclipse” mode, but apparently the sensor is provided with an anti-eclipse circuit which pins the column voltages to a fixed voltage.

The abovementioned explanation is based on a close observation of the behavior of the output signal.  This is illustrated in Figure 2, showing the same results as in Figure 1, but with an adapted scale on the vertical axis.

Figure 2 : Same data as shown in Figure 1, but with an adapted scale on the left vertical axis.

As can be noticed, the average output signal tends to reach saturation for an exposure time of about 17 ms, but then decreases again for longer exposure times.  From 20 ms onwards, the average output signal seems to be clipped to a particular value, and so is the FPN.  A simple explanation for this effect can be the presence of an anti-eclipse circuit.

Does anyone have a better explanation ?

Albert, 06-12-2013.

Forum “ADCs for Imagers” is completely SOLD OUT !

December 4th, 2013

The two planned sessions on Dec. 16-17 and Dec. 19-20, 2013 are completely sold out.  There is no need for further registration because no more seats will be added.  Thanks to all the people who registered.  I will keep you updated about the feedback of the participants.  At that time I will also start with the preparation of a new forum in 2014.

Albert, 4-12-2013.

Status Imaging Forum “ADCs for Imagers”

November 15th, 2013

I just want to give the imaging community a quick update on the registration situation for the forum “ADCs for Imagers”.

Because the interest was/is much higher than expected, a second session will be organized (this was already announced earlier), and the number of seats in each session has been slightly increased (from 24 to 32).

At this moment registration for the forum is still possible, because :

- for the session on Dec. 16 & 17, 2013, there are still 2 seats left,

- for the session on Dec. 19 & 20, 2013, there are still 3 seats left.

If anyone is still interested in registering, take your chance !  Keep in mind that in 2014 the forum will be organized again, but with a different subject !


Albert, 15-11-2013.