Third HARVEST IMAGING FORUM in December 2015

May 5th, 2015

After the very successful forums in 2013 and 2014, a third one will be organized in December 2015 in Voorburg (The Hague), the Netherlands.  The basic intention of the forum is to have a scientific and technical in-depth discussion on one particular imaging topic.  The audience will be strictly limited to enhance and stimulate the interaction with the speaker(s) as well as to allow close contacts between the participants.

The subject of the third forum will be :

“3D Imaging with Time-of-Flight :

Solid-State Devices, Circuits and Architectures”.

A world-class expert in the field,

Dr. David STOPPA,

has been invited and has agreed to explain the ins and outs of this important topic.

The agenda of the forum will be published soon; registration for the forum will start after IISW2015.


Albert, 5/5/2015.

Announcement of the third HARVEST IMAGING FORUM in December 2015

April 17th, 2015

Mark your agenda now for the third Harvest Imaging Forum (or solid-state imaging forum, as it was called earlier), scheduled for Dec. 10-11, 2015.

After the successful gatherings in 2013 and 2014, I am happy to announce a third one.  This third Harvest Imaging Forum will again be a high-level, technical, short course focusing on one particular hot topic in the field of solid-state imaging.  The audience will be strictly limited, just to stimulate as much interaction as possible between the participants and the speaker(s).  The subject of the third forum will be : “3D Time-of-Flight Imaging”.

More information about the speaker and the agenda of the third forum will follow in the coming weeks, but I wanted to share this announcement as early as possible to make sure you can keep your agenda free on those days.

Albert,

April 17th, 2015.

Imaging Trainings scheduled for Spring 2015

April 8th, 2015

Maybe it is good to remind the visitors of this blog of the imaging trainings in spring 2015.  There are still 3 different courses in the pipeline :

- a 2-day class that gives an introduction to the world of CMOS image sensors.  This class is intended for people who have almost no background in solid-state imaging.  This course takes place in Delft on May 6-7, 2015.  Organization through www.fsrm.ch.

- a 5-day class if you want to learn more about imagers than just the working principles.  This class is also intended for “new-comers” in the field, but people who have already been working a few years in imaging can revitalize their knowledge as well.  Key to this class are the exercise sessions at the end of every day, which help the participants put the theory into practice.  This course takes place on May 18-22, 2015 in Barcelona, and is organized by www.cei.se.

- a 2-day class with hands-on measurements and evaluation of an “unknown” camera.  Because the participants have to perform all the characterization work themselves, this course is NOT intended for people fresh in the imaging field.  Preferably the course participants have a few years of experience in the arena of solid-state imaging.  This course takes place in Munich, on June 2-3, 2015, and is organized by www.framos.com.

Albert, 8 april 2015.


ISSCC2015 (4)

February 27th, 2015

Also this year Shizuoka University was present at the ISSCC with an imager paper.  Mochizuki presented a single-shot 200 Mfps 5×3-aperture compressive CMOS imager.  The chip consists of 5 x 3 sub-arrays (multi-aperture), and each sub-array has 64 x 108 pixels of 11.2 um x 5.6 um each.  The chip is fabricated in 0.11 um CIS technology.  The 15 sub-arrays all receive the same image information; each sub-array has its own micro-lens.  The difference between the 15 sub-arrays is the exposure time.  For each sub-array the exposure time is modulated/changed/scrambled in the time domain, such that all the different sub-arrays grab parts of the scenery, but all in different and sometimes mixed time slots.  In this way, the information read out is a kind of compressed information in the time domain.  After solving/reconstructing, the 15 images shot at the same time (= NOT with the same exposure time !) result in 32 different frames in the time domain.  Thus the sensor has an inherent compression of 47 %.
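Just as a quick sanity check of my own (a back-of-the-envelope sketch, not code from the paper) : the compression figure follows directly from the ratio of coded sub-array images to reconstructed frames.

```python
# Back-of-the-envelope check of the quoted compression figure:
# 15 coded sub-array images are reconstructed into 32 frames in the time domain.
measured_images = 5 * 3          # 5 x 3 multi-aperture sub-arrays
reconstructed_frames = 32        # frames recovered after reconstruction

compression = measured_images / reconstructed_frames
print(f"compression = {compression:.0%}")   # -> 47 %
```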

Like many other papers from Shizuoka University, this paper too relies on a clever pixel design around a PPD, backed by a lot of knowledge in the device-physics field.  The paper described the principle of the compressed sensing very nicely, including simulation as well as measurement results.

Albert, 27-02-2015.

ISSCC2015 (3)

February 26th, 2015

Here is another one : a paper from Samsung, presented by Dr. Choi.  His paper can be seen as a kind of continuation of the work he did for his PhD at the Univ. of Michigan : having a sensor ALWAYS TURNED ON in a kind of hibernation mode (= ultra-low power, low resolution, low quality), but waking up as soon as there is any movement in the scene and switching to a normal mode (= higher resolution, higher quality).  Classical ways to lower the power are reducing speed, reducing resolution, reducing the number of bits, etc.  But what I appreciated very much in this work were two additional techniques to lower the power :

- using a classical PPD pixel in the normal mode at 3.3 V, and using the same pixel (with TG always switched ON) in a kind of 3T pixel mode operating at 0.9 V (with reduced performance),

- turning the circuitry of two adjacent PGAs (of 2 adjacent columns in the normal mode) into an 8-bit SAR ADC for the low-power, low-quality mode.

In this way the power of the ALWAYS ON mode was reduced by a factor of 500 compared to the normal mode.  Final power consumption was 45.5 uW.
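Just for perspective, a minimal arithmetic sketch of what those two quoted numbers imply for the normal mode (my own rough estimate, not a figure reported in the paper) :

```python
# What does a 500x reduction down to 45.5 uW imply for the normal (high-quality) mode?
always_on_power_W = 45.5e-6      # always-on / hibernation mode, as quoted
reduction_factor = 500           # as quoted

normal_mode_power_W = always_on_power_W * reduction_factor
print(f"implied normal-mode power ~ {normal_mode_power_W * 1e3:.1f} mW")   # ~22.8 mW
```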

Some more numbers (Numbers add up to Nothing !  Neil Young in “Powderfinger”) : resolution reduced (/4), same frame rate (30 fps), supply voltage reduced from 3.3 V analog/1.8 V digital to 0.9 V for everything, sensitivity down by a factor of 4, FPN up 20x (but still less than 1 %) and random noise up by 4x (expressed in DN, but is 1 DN in the high-quality mode equal to 1 DN in the low-quality mode ???).  But the power goes down by 500 times !

Albert, 26-02-2015.

ISSCC2015 (2)

February 26th, 2015

A second paper in the imaging session highlighted the work of NHK in cooperation with Forza Silicon.  A 133 Mpixel (yes, you read it right, one hundred thirty-three), 60 fps device was described.  The device has on-chip ADCs, one 12-bit SAR ADC per 32 columns.  The ADCs are located at both sides of the device, 242 ADCs at the top and 242 ADCs at the bottom of the chip.  Each SAR ADC internally resolves 14 bits (including redundancy), but at the output each pixel is represented with 12 bits.  The pixel size is 2.45 um, 2×1 shared, 2.5T/pixel, in a 35 mm full-frame format.  Fabrication was done in 0.18 um 1P4M technology.  Due to its large size, the chip is stitched in one direction.  [There are not that many foundries that allow stitching in a CIS 0.18 um process, so it is easy to guess who fabricated this device.]  At full speed, the device delivers 1.15 Gbps/ch; maybe that does not sound like that much, but the device has 112 channels in parallel.  So in total, this adds up to almost 130 Gbps.

To capture all the information and to get all these bits off the chip, a total power consumption of 11 W is needed.  About 50 % of this power goes to the digital blocks.  All ADCs take 1.67 W.  A few more numbers : conversion gain of 80 uV/e, full well 10005 electrons (don’t forget the last 5 electrons), dark current 50 e/sec @ 40 deg. C, temporal noise 7.68 electrons and dynamic range of 62.3 dB (data measured at 60 fps, gain of 2).
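A few of these numbers can be cross-checked against each other; below is a small consistency check of my own, based only on the values quoted above.

```python
import math

# Consistency check of a few quoted numbers for the 133 Mpixel NHK / Forza sensor.
channels = 112                     # parallel output channels
rate_per_channel_gbps = 1.15       # Gbps per channel
print(f"aggregate data rate ~ {channels * rate_per_channel_gbps:.0f} Gbps")    # ~129 Gbps

adcs = 242 + 242                   # SAR ADCs at the top and bottom of the chip
print(f"columns served by the ADCs = {adcs * 32}")                             # 484 x 32 columns

full_well_e = 10005                # electrons
temporal_noise_e = 7.68            # electrons
dynamic_range_db = 20 * math.log10(full_well_e / temporal_noise_e)
print(f"dynamic range ~ {dynamic_range_db:.1f} dB")                            # ~62.3 dB
```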

Albert, 26-02-2015.

ISSCC2015 (1)

February 24th, 2015

The imaging session at this year’s ISSCC started with a presentation by A. Suzuki of Sony.  He presented a 20 Mpixel stacked image sensor for DSC applications.  The stacked device has the imaging part on the top plane, being 2×2-shared pixels of 1.43 um pitch.  Also included on the top layer of silicon is the column electronics.  The ends/outputs of the column circuits are connected to the second layer of silicon by vias.  This is the same concept as presented at last year’s ISSCC.  Half of the column signals are transferred to the second silicon level at the top of the sensor; the other half are transferred to the second silicon level at the bottom of the sensor.  The author did not reveal information about the via pitch.

New is the DOUBLE single-slope ADC for every column, located on the second layer of silicon.  So every pixel can be converted into the digital domain twice and in parallel, resulting in a double sampling of the data.  If the timing of the ADCs is done right, a gain of 3 dB can be realized (equal to the theoretical calculation).  In this configuration of multiple sampling, the resulting noise level is 1.3 electrons for a gain of 27 dB.  But the double column-ADC can also be used in other configurations, for instance for high-speed applications.  Instead of feeding the same signal to the two ADCs, one can also offer two different signals to the ADCs and in this way increase the overall speed of the sensor.  This feature can be attractive for slow-motion applications.  Numbers quoted : 120 fps at 16 Mpix resolution (10 bits with on-chip data compression), 240 fps at 4 Mpix resolution (10 bits) and 960 fps at 0.7 Mpix resolution (10 bits).  For still applications, one can use the sensor with 20 Mpix resolution, 12 bits and a frame rate of 30 fps.
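The 3 dB figure is simply the noise benefit of averaging two conversions of the same pixel value; here is a minimal sketch of the underlying arithmetic (my own illustration of the principle, not code from the paper).

```python
import math

# Averaging two conversions with equal, uncorrelated noise sigma
# reduces the noise to sigma / sqrt(2), i.e. an improvement of ~3 dB.
sigma_single = 1.0                          # noise of a single ADC conversion (arbitrary units)
sigma_averaged = sigma_single / math.sqrt(2)

improvement_db = 20 * math.log10(sigma_single / sigma_averaged)
print(f"noise improvement ~ {improvement_db:.2f} dB")   # ~3.01 dB
```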

The final application of the dual ADC per column is a combination of video and still capture.  While shooting video at higher frame rates using the first ADC of each column, one can grab a single still image at full resolution using the second ADC of each column.

Some more numbers : sensor technology 90 nm 1P4M BSI, logic technology 65 nm 1P7M with 1.7 Mlogic gates on the second silicon level.  Number of pixels 5256 (H) x 3934 (V), 1/1.7 inch, full well 9700 electrons, conversion gain 76.6 uV/e, dynamic range 72 dB at 12 bits.

Albert, 24/2/2015.

How to Measure Modulation Transfer Function (10)

January 12th, 2015

Based on one of the comments/questions from the readers of my blog, the MTF of a camera was characterized as a function of the distance between the “target” and the camera.  Again the slanted-edge method was used; the light input was a green LED (525 nm), F5.6, with a Tamron fixed-focus lens of f = 8 mm.
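For readers who want to try such a measurement themselves, below is a strongly simplified sketch of the slanted-edge calculation (ESF, then LSF, then FFT).  It is my own minimal Python/NumPy illustration and not the code used for these measurements; it assumes a clean, near-vertical, slightly slanted edge in the region of interest.

```python
import numpy as np

def slanted_edge_mtf(roi, oversample=4):
    """Simplified slanted-edge MTF estimate for a near-vertical, slightly slanted edge.

    roi: 2-D array containing a single dark/bright edge.
    Returns (spatial frequency in cycles/pixel, normalized MTF). Sketch only.
    """
    rows, cols = roi.shape
    x = np.arange(cols)

    # 1) Estimate the edge position in every row from the centroid of the row derivative.
    d = np.abs(np.diff(roi.astype(float), axis=1))
    edge_pos = (d * x[:-1]).sum(axis=1) / d.sum(axis=1)

    # 2) Fit a straight line through the per-row edge positions (the slant of the edge).
    slope, offset = np.polyfit(np.arange(rows), edge_pos, 1)

    # 3) Measure each pixel's distance to the fitted edge and bin into an oversampled ESF.
    dist = (x[None, :] - (slope * np.arange(rows)[:, None] + offset)).ravel()
    vals = roi.astype(float).ravel()
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    counts = np.bincount(bins)
    sums = np.bincount(bins, weights=vals)
    filled = np.flatnonzero(counts > 0)
    esf = np.interp(np.arange(len(counts)), filled, sums[filled] / counts[filled])

    # 4) Differentiate (ESF -> LSF), apply a window and Fourier-transform to get the MTF.
    lsf = np.diff(esf) * np.hanning(len(esf) - 1)
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freq = np.fft.rfftfreq(len(lsf), d=1.0 / oversample)   # cycles per pixel
    return freq, mtf
```

Applied to a region of interest around the edge in each recorded image, such a routine produces MTF curves comparable to the ones summarized below, apart from the simplifications made here.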

Figure 1 shows the obtained results.

Figure 1 : MTF as a function of the distance between the test target and the camera (Tamron 8mm lens, 2/3”, F5.6).

As can be seen from the graph, distances of 60 cm and larger give the best as well as the most consistent results.  A distance of 40 cm already gives a lower MTF compared to the larger distances, and for 30 cm and 20 cm the MTF is drastically reduced.

The reason for this fall-off in MTF is twofold :

  • Below 30 cm distance between object and lens, the focusing capability of the lens is limited, and the image becomes blurry (= less contrast and lower MTF),
  • Between 60 cm and 30 cm the reduction in MTF can be explained by the fact that the incoming rays deviate more and more from the normal, and optical as well as electrical cross-talk become larger.  As a result the MTF is reduced.


Albert, 12-01-2015.

Merry Christmas and a Happy New Year

December 19th, 2014

Good Bye 2014 ! 

Another year has almost passed.  Time is running fast, extremely fast.  Time again to make a quick look backwards to see what 2014 brought. 

It was a busy year for Harvest Imaging.  Several courses were organized, in-house as well as public courses.  Thanks to CEI, FSRM and Framos who organize the public or open courses.  Thanks to all my customers for the in-house courses.   The in-house trainings brought me again all over the world.  Several courses are scheduled again in 2015.  The open courses are listed on the website of Harvest Imaging.  Besides the trainings, also consulting went pretty well in 2014.  I could keep myself more than busy. 

In 2013 Harvest Imaging started with a forum, a kind of 2-day class on a solid-state imaging topic outside my own expertise.  The first forum was very successful, so it was decided to run another one in 2014.  That just happened last week and earlier this week.  Many positive reactions were received, so the forum will continue in 2015.

It is quite funny to meet people all over the place telling me that they follow my blog.  I am very pleased with these reactions, and sometimes I am really surprised to see how many people visit the website every day.  Thanks very much for your visits and for your feedback.  It is very much appreciated.  Unfortunately there is something I am missing to put more material on the blog, and that is time.  About two years ago, at the moment I posted a blog, I already had 3 or 4 extra blogs ready to be published at a later stage.  But at this moment, the pace of blogging is equal to the pace of writing down the material.  In 2014 several blogs were spent on the MTF or Modulation Transfer Function.  I think I have never received as many reactions as this time with the MTF.  Apparently it is a subject that “lives” in the community.  A few more blogs about the MTF will follow and then it is over.

2015 will be a year with the International Image Sensor Workshop again in Europe.  Being the General (Co-)Chair of the workshop, preparations started already a while ago.  Together with Johannes Solhusvik and Pierre Magnan, we try to make sure that IISW2015 will be another workshop to remember.  Of course much depends on the quality of the submissions, but Johannes, Pierre and myself will make sure that the setting in which to present your work will be the best we can obtain.  Besides being the co-chair of the workshop, I accepted another task, being the Guest Editor of the upcoming special Image Sensor issue of the IEEE Transactions on Electron Devices.  For both “events”, workshop and special issue, the call for papers has already been published.

Recently I was lucky to receive the SEMI Award, but another award, even a real Emmy Award, was received for the broadcast camera that was developed around one of the CCDs created at the time I headed the CCD group at Philips.  That is really a great recognition for the CCD team working at that time on the DPM-CCD.  The concept of a switchable pixel was developed in close cooperation between the camera developers and the sensor designers, leading to a very nice commercial success.  Congratulations to all the people who contributed to this success.

In other words, also 2015 will be busy again.  Nevertheless : Welcome 2015 !  Wishing all my readers a Merry Christmas and a Happy New Year. 

Albert, 19-12-2014.

How to Measure Modulation Transfer Function (9)

November 14th, 2014

Last time the MTF results obtained with green light (525 nm) were highlighted.  This time those results are compared with the MTF measurements done with blue light (470 nm), red light (630 nm) and near-IR light (850 nm).  To compare the results, the measured MTF is shown as a function of the F-number of the lens, and at 3 different spatial frequencies : 0.1, 0.25 and 0.4 times the spatial sampling frequency.  Figures 1, 2, 3 and 4 illustrate the outcome of the measurements, respectively for the blue, green, red and near-IR light.  All results reported come from the same sensor, same camera, same lens (Tamron, 8mm) and same measurement set-up.  All four figures also illustrate the size of the Airy disk as a function of the lens F-number (second vertical axis).

Figure 1 : MTF as a function of lens F-number at 0.1, 0.25 and 0.4 times the sampling frequency, measurements done with blue light input (470 nm).

From Figure 1 it can be learned that the optimum setting of the lens F-number is equal to F5.6.  At F16 the Airy disk is equal to 3 times the pixel pitch, and this large size of the blurred spot limits the MTF at the high F-numbers.
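For reference, the Airy disk diameter (to the first dark ring) is d = 2.44 x wavelength x F-number.  Below is a small sketch of that calculation for the four wavelengths used in these measurements; note that the pixel pitch in the script is only an assumed placeholder for illustration, since the pitch of the device under test is not quoted in this post.

```python
# Airy disk diameter d = 2.44 * wavelength * F-number (diameter to the first dark ring).
wavelengths_nm = {"blue": 470, "green": 525, "red": 630, "near-IR": 850}
f_number = 16
pixel_pitch_um = 6.0   # assumed value for illustration only (not quoted in this post)

for name, wl_nm in wavelengths_nm.items():
    d_um = 2.44 * (wl_nm * 1e-3) * f_number          # diameter in micrometers
    print(f"{name:8s}: Airy disk ~ {d_um:4.1f} um "
          f"(~{d_um / pixel_pitch_um:.1f} x assumed pixel pitch)")
```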

Figure 2 : MTF as a function of lens F-number at 0.1, 0.25 and 0.4 times the sampling frequency, measurements done with green light input (525 nm).

Changing the light from blue to green shifts the optimum setting of the lens F-number to F8.  This seems to contradict what can be expected from the diffraction limit of the lens : because the size of the Airy disk has grown to about 3.5 times the pixel pitch at F16, one would expect the optimum F-number to shift to a lower value in comparison to the measurement shown in Figure 1.

Figure 3 : MTF as a function of lens F-number at 0.1, 0.25 and 0.4 times the sampling frequency, measurements done with red light input (630 nm).

The “sweet spot” for the lens setting shifted slightly, with an optimum between F8 and F11, despite a further growth of the Airy diameter.  So the trend of shifting to larger F-numbers when the wavelength of the light increases seems to be consistent, even though the diffraction is increasing.

Figure 4 : MTF as a function of lens F-number at 0.1, 0.25 and 0.4 times the sampling frequency, measurements done with near-IR light input (850 nm).

Not visible in the figure is the Airy disk diameter at F16, which is equal to more than 5.5 times the pixel pitch.  So at the highest F-number the MTF will absolutely be limited by the diffraction of the lens, but nevertheless the best MTF values are still measured around F11.

As already mentioned in the previous blog, the MTF of the camera system (= lens + sensor + processing) is mainly determined by 3 factors :

  • Diffraction of the lens, being a strong function of the F-number and which is worst for F16 and best for F1.4,
  • Aberrations of the lens, being worst for the lowest F-numbers,
  • Cross-talk in the pixels (optical as well as electrical), expected to be worst for the lowest F-numbers as well, because for these settings of the lens the angle under which the rays hit the sensor deviates the most from the normal.

The major observation, namely that the optimum lens F-number (for MTF performance) shifts to larger F-values despite the increasing wavelength, indicates that the cross-talk contribution to the deterioration of the MTF is a major factor at the lower F-numbers.  The MTF drop is remarkable when the F-number is kept constant but the wavelength of the light source is changed.  So the wavelength of the light has a major influence; the latter is due to the lower absorption coefficient of silicon at these longer wavelengths.

As a general conclusion : the MTF of the device-under-test is seriously suffering from cross-talk between the pixels, in particular when the light comes in at angles that deviate more from the normal (= low F-numbers) and/or when the light can penetrate deeper into the silicon (= longer wavelengths).

Albert, 14-11-2014.