Imaging Trainings scheduled for Spring 2015

April 8th, 2015

It may be good to remind the visitors of this blog of the imaging trainings scheduled for Spring 2015.  There are still 3 different courses in the pipeline :

- a 2-day class that gives an introduction to the world of CMOS image sensors.  This class is intended for people who have almost no background in solid-state imaging.  This course takes place in Delft on May 6-7, 2015.  Organization through

- a 5-day class if you want to learn more about imagers than just the working principles.  This class is also intended for newcomers in the field, but people who have already been working a few years in imaging can revitalize their knowledge as well.  Key to this class are the exercise sessions at the end of every day, helping the participants to put the theory into practice.  This course takes place on May 18-22, 2015 in Barcelona, and is organized by

- a 2-day class with hands-on measurements and evaluation of an “unknown” camera.  Because the participants have to perform all characterization work themselves, this course is NOT intended for people fresh in the imaging field.  Preferably the course participants have a few years of experience in the arena of solid-state imaging.  This course takes place in Munich, on June 2-3, 2015, organized by

Albert, 8 April 2015.



ISSCC2015 (4)

February 27th, 2015

Also this year Shizuoka University was present at the ISSCC with an imager paper.  Mochizuki presented a single-shot 200 Mfps 5×3 Aperture Compressive CMOS Imager.  The chip consists of 5 x 3 subarrays (multi-aperture), and each subarray has 64 x 108 pixels, each of 11.2 um x 5.6 um.  The chip is fabricated in 0.11 um CIS technology.  The 15 sub-arrays all receive the same image information; each sub-array has its own micro-lens.  But the difference between the 15 sub-arrays is the exposure time.  For each sub-array the exposure time is modulated/changed/scrambled in the time domain, such that all the different sub-arrays grab parts of the scenery, but all in different and sometimes mixed time slots.  In this way, the information read out is a kind of compressed information in the time domain.  After solving/reconstructing, the 15 images shot at the same time (= NOT with the same exposure time !) result in 32 different frames in the time domain.  Thus the sensor has an inherent compression of 47 %.
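The quoted compression figure follows directly from the frame counts above: 15 sub-array snapshots are reconstructed into 32 time frames, so the recorded data is 15/32 of the reconstructed data. A quick check:

```python
# Compression ratio of the compressive imager: 15 sub-array snapshots
# are reconstructed into 32 distinct time frames.
recorded = 15
reconstructed = 32
ratio = recorded / reconstructed
print(f"inherent compression: {ratio:.1%}")  # prints "inherent compression: 46.9%"
```

which rounds to the 47 % quoted in the paper.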

Like many other papers from Shizuoka University, this paper also relies on a clever pixel design around a PPD, with a lot of knowledge in the device physics field.  The paper described very nicely the principle of compressed sensing, including simulation as well as measurement results.

Albert, 27-02-2015.

ISSCC2015 (3)

February 26th, 2015

Here is another one : a paper of Samsung, presented by Dr. Choi.  His paper can be seen as a kind of continuation of the work he did for his PhD at Univ. of Michigan : having a sensor ALWAYS TURNED ON in a kind of hibernation mode (= ultra-low power, low resolution, low quality), but waking up as soon as there is any movement in the scene and switching to a normal mode (= higher resolution, higher quality).  Classical ways to lower the power are reducing speed, reducing resolution, reducing the number of bits, etc.  But what I appreciated very much in this work were two additional techniques to lower the power :

- using a classical PPD pixel in the normal mode at 3.3 V, and using the same pixel (with TG always switched ON) in a kind of 3T pixel mode operating at 0.9 V (with reduced performance),

- turning the circuitry of two adjacent PGA’s (of 2 adjacent columns in the normal mode) into an 8-bit SAR ADC for the low-power, low quality mode.

In this way the power of the ALWAYS ON mode was reduced by a factor of 500 compared to the normal mode.  Final power consumption was 45.5 uW.

Some more numbers (Numbers add up to Nothing ! Neil Young in “Powderfinger”) : reduced resolution (/4), same fps (30 fps), supply voltage reduced from 3.3 V analog/1.8 V digital to 0.9 V for all, sensitivity down by a factor of 4, FPN went up 20x (but still less than 1 %) and random noise went up by 4x (expressed in DN, but is 1 DN in the high-quality mode equal to 1 DN in the low-quality mode ???).  But power goes down by 500 times !
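For reference, the quoted 500x reduction and the 45.5 uW figure together imply a normal-mode power of roughly 23 mW (a back-of-the-envelope check, not a number from the paper):

```python
# Implied normal-mode power from the quoted always-on figures:
# 45.5 uW in the low-power mode, a 500x reduction versus normal mode.
low_power_w = 45.5e-6
reduction = 500
normal_mode_w = low_power_w * reduction
print(f"implied normal-mode power: {normal_mode_w * 1e3:.2f} mW")  # 22.75 mW
```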

Albert, 26-02-2015.

ISSCC2015 (2)

February 26th, 2015

A second paper in the imaging session highlighted the work of NHK in cooperation with Forza Silicon.  A 133 Mpixel (yes, you read it right, one hundred thirty three), 60 fps device was described.  The device has on-chip ADC’s, one 12-bit SAR ADC per 32 columns.  The ADCs are located at both sides of the device, 242 ADCs at the top and 242 ADCs at the bottom of the chip.  Each SAR ADC internally resolves 14 bits with redundancy, but at the output each pixel is represented with 12 bits.  The pixel size is 2.45 um, 2×1 shared, 2.5T/pixel, 35 mm full-frame format.  Fabrication was done in 0.18 um 1P4M technology.  Due to its large size, the chip is stitched in one direction.  [There are not that many foundries that allow stitching in a CIS 0.18 um process, so it is easy to guess who fabricated this device.]  At full speed, the device is delivering 1.15 Gbps/ch.  Maybe that does not sound like much, but the device has 112 channels in parallel.  So in total, this adds up to almost 130 Gbps.
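The “almost 130 Gbps” can be verified from the per-channel rate and the channel count:

```python
# Total output data rate of the NHK/Forza device:
# 112 parallel channels at 1.15 Gbps each.
channels = 112
rate_gbps = 1.15
total_gbps = channels * rate_gbps
print(f"total data rate: {total_gbps:.1f} Gbps")  # 128.8 Gbps, i.e. almost 130 Gbps
```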

To capture all the information and to get all these bits off the chip, a total power consumption of 11 W is needed.  About 50 % of this power goes to the digital blocks.  All ADCs take 1.67 W.  A few more numbers : conversion gain of 80 uV/e, full well 10005 electrons (don’t forget the last 5 electrons), dark current 50 e/sec @ 40 deg. C, temporal noise 7.68 electrons and dynamic range of 62.3 dB (data measured at 60 fps, gain of 2).
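The quoted dynamic range is consistent with the full-well and noise numbers, since DR = 20·log10(full well / temporal noise):

```python
import math

# Dynamic range from the quoted full-well capacity and temporal noise:
# DR = 20 * log10(full_well / noise)
full_well_e = 10005      # electrons (including the last 5)
noise_e = 7.68           # electrons temporal noise
dr_db = 20 * math.log10(full_well_e / noise_e)
print(f"dynamic range: {dr_db:.1f} dB")  # 62.3 dB, matching the quoted value
```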

Albert, 26-02-2015.

ISSCC2015 (1)

February 24th, 2015

The imaging session at this year’s ISSCC started with a presentation of A. Suzuki of Sony.  He presented a 20 Mpixel, stacked image sensor for DSC applications.  The stacked device has the imaging part on the top plane, being 2×2 shared pixels of 1.43 um pitch.  Also included on the top layer of silicon are the column electronics.  The outputs of the column circuits are connected to the second layer of silicon by vias.  This is the same concept as presented at last year’s ISSCC.  Half of the column signals are transferred to the second silicon level at the top of the sensor, the other half of the column signals are transferred to the second silicon level at the bottom of the sensor.  The author did not reveal information about the via pitch.

New is the DOUBLE single-slope ADC for every column, located on the second layer of silicon.  So every pixel can be converted into the digital domain twice and in parallel, resulting in a double sampling of the data.  If the timing of the ADCs is done right, a gain of 3 dB can be realized (equal to the theoretical calculation).  In this configuration of multiple sampling, the resulting noise level is 1.3 electrons for a gain of 27 dB.  But the double column-ADC can also be used in other configurations, for instance for high-speed applications.  Instead of feeding the two ADCs the same signal, one can also offer two different signals to the ADCs and in this way increase the overall speed of the sensor.  This feature can be attractive for slow-motion applications.  Numbers quoted : 120 fps at 16 Mpix resolution (10 bits with on-chip data compression), 240 fps at 4 Mpix resolution (10 bits) and 960 fps at 0.7 Mpix resolution (10 bits).  For still applications, one can use the sensor with 20 Mpix resolution, 12 bits and a frame rate of 30 fps.
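The 3 dB figure is simply the textbook gain of averaging two independent conversions: the signal is identical in both, while the noise is uncorrelated, so the noise drops by √2. A small simulation (with illustrative, made-up noise values, not Sony’s data):

```python
import numpy as np

# Averaging two independent ADC conversions of the same pixel value:
# the noise drops by sqrt(2), i.e. an SNR gain of 20*log10(sqrt(2)) = 3 dB.
rng = np.random.default_rng(42)
n = 200_000
adc_a = rng.normal(0.0, 1.0, n)   # noise of the first column ADC (illustrative)
adc_b = rng.normal(0.0, 1.0, n)   # noise of the second column ADC (illustrative)
avg = 0.5 * (adc_a + adc_b)       # double sampling: average the two conversions

gain_db = 20 * np.log10(adc_a.std() / avg.std())
print(f"noise reduction: {gain_db:.2f} dB")  # close to the theoretical 3.01 dB
```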

The final application of the dual ADC for each column is a combination of video and still capture.  While shooting the video at higher frame rates using the first ADC for each column, one can grab a single still image at full resolution using the second ADC for each column.

Some more numbers : sensor technology 90 nm 1P4M BSI, logic technology 65 nm 1P7M with 1.7 Mlogic gates on the second silicon level.  Number of pixels 5256 (H) x 3934 (V), 1/1.7 inch, full well 9700 electrons, conversion gain 76.6 uV/e, dynamic range 72 dB at 12 bits.

Albert, 24/2/2015.

How to Measure Modulation Transfer Function (10)

January 12th, 2015

Prompted by one of the comments/questions of the readers of this blog, the MTF of a camera is characterized as a function of the distance between the “target” and the camera.  Again the slanted edge method was used, the light input was a green LED (525 nm), F5.6, Tamron fixed focus lens f = 8 mm.

Figure 1 shows the obtained results.

Figure 1 : MTF as a function of the distance between the test target and the camera (Tamron 8mm lens, 2/3”, F5.6).

As can be seen from the graph, distances of 60 cm and larger give the best as well as the most consistent results.  A distance of 40 cm already gives a lower MTF compared to the larger distances, but for 30 cm and 20 cm the MTF is drastically reduced.

The reason for this fall-off in MTF is twofold :

  • Below 30 cm distance between object and lens, the focusing capability of the lens is limited, and the image becomes blurry (= less contrast and lower MTF),
  • Between 60 cm and 30 cm the reduction in MTF can be explained by the fact that the incoming rays deviate more and more from the normal, and optical as well as electrical cross-talk will become larger.  As a result the MTF is reduced.


Albert, 12-01-2015.

Merry Christmas and a Happy New Year

December 19th, 2014

Good Bye 2014 ! 

Another year has almost passed.  Time is running fast, extremely fast.  Time again to make a quick look backwards to see what 2014 brought. 

It was a busy year for Harvest Imaging.  Several courses were organized, in-house as well as public courses.  Thanks to CEI, FSRM and Framos who organize the public or open courses.  Thanks to all my customers for the in-house courses.   The in-house trainings brought me again all over the world.  Several courses are scheduled again in 2015.  The open courses are listed on the website of Harvest Imaging.  Besides the trainings, also consulting went pretty well in 2014.  I could keep myself more than busy. 

In 2013 Harvest Imaging started with a forum, being a kind of 2-day class in a field of solid-state imaging, but outside my own expertise.  The first forum was very successful so that it was decided to run another one in 2014.  That just happened last week and earlier this week.  Many positive reactions were received, so the forum will continue in 2015.

It is quite funny to meet people all over the place telling me that they follow my blog.  I am very pleased with these reactions, and sometimes I am really surprised to see how many people visit the website every day.  Thanks very much for your visits and for your feedback.  It is very much appreciated.  Unfortunately there is one thing I am missing to put more material on the blog, and that is time.  About two years ago, at the moment that I posted a blog, I had already 3 or 4 extra blogs ready to be published at a later stage.  But at this moment, the pace of blogging is equal to the pace of writing down the material.  In 2014 several blogs were spent on the MTF or Modulation Transfer Function.  I think I never received as many reactions as this time with the MTF.  Apparently it is a subject that “lives” in the community.  A few more blogs about the MTF will follow and then it is over. 

2015 will be a year with the International Image Sensor Workshop again in Europe.  Being the General (Co-)Chair of the workshop, I started the preparations already a while ago.  Together with Johannes Solhusvik and Pierre Magnan, we try to make sure that IISW2015 will be another workshop to remember.  Of course much depends on the quality of the submissions, but Johannes, Pierre and myself will make sure that the setting to present your work will be the best we can obtain.  Besides being the co-chair of the workshop I accepted another task, being the Guest-Editor for the upcoming special Image Sensor issue of IEEE Transactions on Electron Devices.  For both “events”, workshop and special issue, the call for papers is already published.

Recently I was lucky to receive the SEMI Award, but another award, a real Emmy Award, was received for the broadcast camera that was developed around one of the CCDs developed at the time I headed the CCD group at Philips.  That is really a great recognition for the CCD team working at that time on the DPM-CCD.  The concept of a switchable pixel was developed in a close cooperation between the camera developers and the sensor designers, leading to a very nice commercial success.  Congratulations to all people who contributed to this success.

In other words, also 2015 will be busy again.  Nevertheless : Welcome 2015 !  Wishing all my readers a Merry Christmas and a Happy New Year. 

Albert, 19-12-2014.

How to Measure Modulation Transfer Function (9)

November 14th, 2014

Last time the MTF results obtained with green light (525 nm) were highlighted.  This time those results are compared with the MTF measurements done with blue light (470 nm), red light (630 nm) and near-IR light (850 nm).  To compare the results, the measured MTF is shown as a function of the F-number of the lens, and at 3 different spatial frequencies : 0.1, 0.25 and 0.4 times the spatial sampling frequency.  Figures 1, 2, 3 and 4 illustrate the outcome of the measurements, respectively for the blue, green, red and near-IR light.  All results reported come from the same sensor, same camera, same lens (Tamron, 8mm) and same measurement set-up.  All four figures also illustrate the size of the Airy disk as a function of the lens F-number (second vertical axis).

Figure 1 : MTF as a function of lens F-number at 0.1, 0.25 and 0.4 times the sampling frequency, measurements done with blue light input (470 nm).

From figure 1 it can be learned that the optimum setting of the lens F-number is equal to F5.6.  At F16 the Airy disk is equal to 3 times the pixel pitch, and this large size of the blurred spot limits the MTF at the high F-numbers.

Figure 2 : MTF as a function of lens F-number at 0.1, 0.25 and 0.4 times the sampling frequency, measurements done with green light input (525 nm).

Changing the light from blue to green shifts the optimum setting of the lens F-number to F8.  This seems to be in contradiction to what can be expected from the diffraction limits of the lens : because the size of the Airy disk has grown to about 3.5 times the pixel pitch for F16, it would be expected that the optimum setting for the F-number would shift to a lower F-number in comparison to the measurement shown in Figure 1.

Figure 3 : MTF as a function of lens F-number at 0.1, 0.25 and 0.4 times the sampling frequency, measurements done with red light input (630 nm).

The “sweet spot” for the lens setting shifted slightly to an optimum between F8 and F11, despite a further growth of the Airy diameter.  So the trend of shifting to larger F-numbers as the wavelength of the light increases seems to be consistent, even though the diffraction is increasing.

Figure 4 : MTF as a function of lens F-number at 0.1, 0.25 and 0.4 times the sampling frequency, measurements done with near-IR light input (850 nm).

Not visible in the figure is the Airy disk diameter at F16, which is equal to more than 5.5 times the pixel pitch.  So at the highest F-number the MTF absolutely will be limited by the diffraction of the lens, but nevertheless the best MTF values are still measured around F11.

As already mentioned in the previous blog, the MTF of the camera system (= lens + sensor + processing) is mainly determined by 3 factors :

  • Diffraction of the lens, being a strong function of the F-number and which is worst for F16 and best for F1.4,
  • Aberrations of the lens, being worst for the lowest F-numbers,
  • Cross talk in the pixels (optical as well as electrical), expected to be worst for the lowest F-numbers as well, because for these settings of the lens, the angle under which the rays hit the sensor deviates the most from the normal.

The major observation of the optimum lens F-number (for MTF performance) shifting to larger F-values, despite the increasing wavelengths, indicates that the cross-talk contribution to the deterioration of the MTF is a major factor for lower F-numbers.  The MTF drop is remarkable in the case the F-number is kept constant but the wavelength of the light source is changed.  So the wavelength of the light has a major influence; the latter is due to the lower absorption coefficient of the silicon for these longer wavelengths.
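Using the Airy disk formula d = 2.44·λ·F from part (8) of this series, together with the 6 µm pixel pitch assumed there, the disk sizes mentioned in this post can be reproduced at F16:

```python
# Airy disk diameter d = 2.44 * wavelength * F-number at F16, expressed in
# pixel pitches (6 um pixel, as assumed in part (8) of this series).
pixel_um = 6.0
f_number = 16
for name, wl_um in [("blue", 0.470), ("green", 0.525),
                    ("red", 0.630), ("near-IR", 0.850)]:
    airy_um = 2.44 * wl_um * f_number
    print(f"{name:7s}: {airy_um:4.1f} um = {airy_um / pixel_um:.1f} x pixel pitch")
```

This reproduces the roughly 3x (blue), about 3.5x (green) and more than 5.5x (near-IR) figures quoted in this post.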

As a general conclusion : the MTF of the device-under-test is seriously suffering from cross-talk between the pixels, in particular when the light is coming under angles that deviate more from the normal (= low F-numbers) and/or when the light can penetrate deeper into the silicon (= longer wavelengths).

Albert, 14-11-2014.

How to Measure Modulation Transfer Function (8)

October 28th, 2014

Now that we know how to apply the slanted edge method, it is time to play around with it and gather some interesting results.

The camera under test is provided with a special lens that has a variable iris setting with a click system, so that it is easy to define and change the F-number of the lens during the measurements. Using a green LED light source with a wavelength of 525 nm, a set of MTF measurements is made at various settings of the F-number. The result is shown in Figure 1.

Figure 1 : MTF measurement for green light, as a function of the F-number of the lens.

While changing the F-number, the exposure time setting of the camera is optimized so that a constant output signal is obtained. For every change of one F-stop, the exposure time is adapted by a factor of 2. As can be seen from Figure 1, starting with a low F-number and moving towards higher values, the MTF increases from F1.4 till F8 and then starts decreasing again till F16. This effect of increase and decrease can be explained by the interaction of three effects :

- Most lens aberrations are becoming worse for lower F-numbers. So changing the lens setting from a low F-number to a larger F-number will decrease the lens aberrations and will increase the sharpness of the image projected on the sensor. Consequently, the MTF will increase,

- Low F-numbers result in larger angles under which the rays are hitting the sensor. If the angle of the incoming rays deviates more from the normal, the chance of generating optical and electrical cross talk is becoming larger. So higher F-numbers result in less cross-talk and better MTF,

- Even with a perfect lens, a point at the object plane will result in a disk at the image plane. This so-called Airy disk has a diameter equal to 2.44·λ·F, in which λ represents the wavelength of the incoming light. Taking into account a pixel size of 6 um and a wavelength of 0.525 um, the size of the Airy disk becomes equal to the pixel pitch for F4.7, equal to 2 times the pixel pitch for F9.4, and equal to 3 times the pixel pitch for F14.1. So if the F-number is becoming larger, the spot size is becoming larger as well and the image at the sensor level is becoming more blurred. This effect is making the MTF lower, which can be observed for F8 and higher.
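The break-even F-numbers quoted above follow from solving 2.44·λ·F = k·(pixel pitch) for F:

```python
# F-number at which the Airy disk (2.44 * wavelength * F) equals k pixel
# pitches, for a 6 um pixel and green light of 0.525 um.
pixel_um = 6.0
wavelength_um = 0.525
for k in (1, 2, 3):
    f_number = k * pixel_um / (2.44 * wavelength_um)
    print(f"Airy disk = {k} x pixel pitch at F{f_number:.1f}")
# prints F4.7, F9.4 and F14.1, the values used in the text
```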

The dependency on the F-number is better observable in Figure 2, which illustrates the MTF at 3 different values of the spatial frequency (10 %, 25 % and 40 % of the sampling frequency).

Figure 2 : MTF as a function of the F-number for 3 different spatial frequencies : 10 %, 25 % and 40 % of the sampling frequency, and Airy disk size as a function of F-number.

Also the size of the Airy disk as a function of the lens F-number is shown, referring to the right vertical axis. As can be seen, for green light F8 seems to be the optimum setting of the lens as far as MTF of the camera system is concerned.   More on this topic next time !

Albert, 28-10-2014.

How to Measure Modulation Transfer Function (7)

October 6th, 2014

In the meantime there should be enough explanation about the slanted edge method in this blog; it is time now to do the measurements.  For those of you who want to get quick results with the slanted edge method, you can follow the rules/guidelines/advice of the ISO-12233 standard, buy the test chart, install the software, and get going.  Of course, it is much more fun to develop everything yourself, and by the way, that is the proper way to learn all the details about the slanted edge method for MTF characterization.

So now the first measurement results of the slanted edge method will be highlighted.  The various steps to come to the appropriate results go as follows :

  • Focus an object/target with a slanted edge (preferably between 2° and 10°) on the imager,
  • Grab 50 images of the slanted edge object and average these images to lower the temporal noise,
  • Grab 50 images of a uniform target (without changing the camera settings, without changing the light source) and average these images to lower the temporal noise,
  • Grab 50 images in dark (without changing the camera settings) and average these images to lower the temporal noise,
  • Correct the slanted image data for non-uniformities in dark and for non-uniformities in pixel response and/or non-uniformities of the light source,
  • Select a smaller window of the image in which the slanted edge is present,
  • Calculate the slope of the slanted edge w.r.t. the vertical column or horizontal row direction of the imager, the slope of the slanted edge is needed to calibrate/normalize the horizontal spatial frequency axis of the MTF curve,
  • Record the spatial frequency response (SFR) in 4 adjacent columns, merge the 4 SFR’s and perform a further data interpolation, to get equidistant data points in the spatial domain,
  • Calculate the line-spread function (LSF) based on the 4-times oversampled SFR,
  • Pass the LSF through a fast-Fourier transform, resulting in the optical transfer function, and calculate the magnitude of the latter to obtain the modulation transfer function,
  • Normalize both axes to get the classical MTF curve.
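The recipe above can be sketched in a few lines of Python. This is a simplified illustration only (the edge is located per row via its half-maximum crossing, the slope comes from a straight-line fit, and the oversampled edge profile is built by simple binning), not the full ISO-12233 implementation; all function and variable names are my own:

```python
import numpy as np

def slanted_edge_mtf(img, oversample=4):
    """Simplified slanted-edge MTF, following the steps listed above."""
    rows, cols = img.shape
    # locate the dark-to-light crossing in every row (sub-pixel accurate)
    edge_pos = []
    for r in range(rows):
        line = img[r].astype(float)
        half = 0.5 * (line.min() + line.max())
        idx = int(np.argmax(line > half))          # first pixel above half level
        x0, x1 = line[idx - 1], line[idx]
        edge_pos.append(idx - 1 + (half - x0) / (x1 - x0))
    # fit the slope of the edge w.r.t. the row direction
    slope, intercept = np.polyfit(np.arange(rows), edge_pos, 1)
    # project every pixel onto the edge normal -> oversampled edge profile
    dist = np.arange(cols)[None, :] - (slope * np.arange(rows)[:, None] + intercept)
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    sums = np.bincount(bins.ravel(), weights=img.astype(float).ravel())
    counts = np.bincount(bins.ravel())
    esf = sums[counts > 0] / counts[counts > 0]
    # differentiate -> line spread function, window against noise at the ends
    lsf = np.diff(esf) * np.hamming(esf.size - 1)
    # FFT of the LSF -> MTF, normalized to 1 at DC
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]
```

On a synthetic slanted edge this returns an MTF curve normalized to 1 at zero spatial frequency that falls off towards higher frequencies, as expected.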

Figure 1 shows some intermediate steps of the MTF characterization.  The left part of figure 1 illustrates the observed slanted edge, in a window of 100H x 350V pixels.  The slanted edge is created by means of the edge of a MacBeth chart in front of a white sheet of paper.  The middle part of figure 1 shows a randomly chosen horizontal line (in red) along which the dark-light transition is checked.  Through this dark-light transition the pixel values along a vertical column (in green) are used for the SFR measurement.  Finally, the right part of figure 1 shows the calculated slanted edge (in blue).

Figure 1 : Three illustrations of the observed slanted edge : left the raw data coming off the sensor, middle : row and column along which the SFR is measured, right : the slanted edge as calculated by the software tool.

Figure 2 shows the SFR and LSF based on the pixel values shown in figure 1 and after 4 times oversampling of the data (= using the data of 4 adjacent columns).

Figure 2 : Spatial Frequency Response after 4x oversampling.

As can be seen from the SFR in figure 2, the transition from dark (left side) to light (right side) is relatively steep, but this steepness is much less visible in the LSF in figure 2.  The latter has to do with the 4x oversampling, which reduces the difference between two neighbouring measurement points.  Also remarkable is the relatively large variation in the white background used in the measurement.  But these variations show up at a relatively high frequency and will not influence the MTF measurement.

The final MTF result is shown in Figure 3.


Figure 3 : MTF obtained by the slanted edge method.

The measurement with white light (R=G=B) results in an MTF of 0.25 or 25 % at the Nyquist frequency.  For a pixel with a large fill factor (the exact value is not known, but the pixel pitch is large and with micro-lenses, a large fill factor can be expected) this number of 0.25 or 25 % is relatively low.  It should be noted that not just the sensor MTF is measured, but the measurement does include the camera lens as well !

Albert, 06-10-2014.