June 23rd, 2015
The fight on stacking has begun. After Sony’s presentation at ISSCC, others are following on the stacked road. Omnivision showed their stacking architecture with the TSVs outside the imaging array. They claim to have the technology ready to start production of stacked imagers with a pixel pitch of 1 um. Olympus showed their improved work over the one presented two years ago at ISSCC. Olympus has a contact between the two silicon layers for every group of 2×2 pixels. They created a 16M pixel device with 4M direct contacts, each with a 7.6 um pitch. New compared to the ISSCC paper is the CDS capability buried in the second layer of silicon. Also remarkable : all circuitry on the top-level silicon is p-type ! Because a metal light shield is used between the two layers of silicon, a PLS of -180 dB is obtained. Giant steps forward in their stacked wafer-to-wafer imager process.
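A quick back-of-the-envelope check on the Olympus numbers (just a sketch; the 3.8 um pixel pitch below follows from the reported contact pitch and is not stated explicitly in the paper) :

```python
# One inter-layer contact per 2x2 pixel group, as reported by Olympus.
pixels = 16e6                          # 16M pixel device
contacts = pixels / 4                  # one contact per 2x2 group -> 4M contacts
contact_pitch_um = 7.6                 # reported contact pitch
pixel_pitch_um = contact_pitch_um / 2  # contacts sit on a 2x coarser grid than the pixels
print(contacts, pixel_pitch_um)        # -> 4000000.0 3.8
```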
Like Olympus, NHK also showed wafer-to-wafer bonding using Au contacts. Nice to also get some information about the technology of the bonding itself. TSMC had a paper about photon emission in a stacked CIS. Of course the second layer with the processing circuitry in a stacked image sensor is not designed/optimized for imaging purposes, and consequently during operation the circuitry present in this layer can generate some light that can be captured by the top layer. This is no longer PLS but SLP, because the light is coming from the opposite direction. The last paper in this session also came from TSMC and dealt with dark FPN improvement by a stacked CIS process. Focus was put on the decomposition of the FPN by biasing/switching the TG in an appropriate way.
June 22nd, 2015
The first session of IISW2015 was devoted to larger devices intended for digital still photography. Samsung presented a 28M APS-C sensor with BSI technology. It is not common to go for BSI for these large dies, but apparently the technology and yield are now ready to apply BSI to these larger devices as well. Remarkable dark performance : 9 electrons/s dark current at 60 deg.C, 1.8 electrons of random noise at 24 dB gain. The architecture is characterized by 1 ADC for 2 columns, double column busses and an optimized read sequence to allow for binning.
Canon described their sensor with phase-detection auto-focus capability in EVERY pixel. This solution requires no light shield in the auto-focus pixels and no interpolation of the auto-focus pixels. This sensor is already available in Canon cameras, but it is the first time Canon publishes technical information about the device. Because of the dual photodiode in every pixel, every pixel is provided with two readout structures, so every pixel has 8 transistors. A random noise level of 1.8 electrons is reported at gain = 32 for a single photodiode.
Sony also presented a CMOS imager with auto-focus functionality in every pixel. This sensor is provided with a diagonal pixel orientation, so that the rows have alternately G pixels and R/B pixels. To make this sensor compatible with the installed software base, the pixel stream is first of all converted into Bayer RGB. Also of interest for this sensor architecture with a dual PD in every pixel is the option for HDR by using 1 PD/pixel for a short exposure time and 1 PD/pixel for a long exposure time.
Teledyne DALSA published one of the very few CCD papers at the workshop, covering mainly large-area devices, e.g. 32M, 60M and 250M. Remarkable is the ultra-low dark current for these devices : 2 pA/cm2 at 60 deg.C. These low values make these devices very well suited for extremely long exposure times.
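To put the 2 pA/cm2 in perspective, here is a hedged conversion to electrons per second per pixel. Note that the pixel size is not given above; the 6 um pitch in the sketch is purely an illustrative assumption :

```python
# Converting a dark current density into electrons/s per pixel.
# CAUTION: the 6 um pixel pitch is an assumed, illustrative value.
q = 1.602e-19                      # electron charge [C]
j_dark = 2e-12                     # reported dark current density [A/cm2]
pixel_um = 6.0                     # ASSUMED pixel pitch [um], not from the paper
area_cm2 = (pixel_um * 1e-4) ** 2  # pixel area in cm2
electrons_per_s = j_dark * area_cm2 / q
print(round(electrons_per_s, 1))   # -> 4.5 e/s for the assumed pixel
```

So even at 60 deg.C, a (hypothetical) 6 um pixel would collect only a handful of dark electrons per second, which explains the suitability for very long exposures.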
Albert, 22 juni 2015.
June 19th, 2015
Some interesting trends in image sensor technology for consumer applications are :
- the incorporation of deep trench isolation (DTI) between the pixels to lower the optical and electrical cross-talk. DTI defined from the front side (ST) as well as from the back side (Samsung) had already been announced, but new at the workshop are DTIs that do not completely go through the thinned BSI silicon,
- building “walls” between the colour filters (called “buried CFA”) is finding its way to production. These walls limit optical and spectral cross-talk,
- very thin optical stacks, down to 1.5 um for BSI sensors,
- incorporation of focus pixels for auto-focusing purposes. These focus pixels can be incorporated in a regular pattern, but sometimes a semi-random pattern is used as well. Moreover, the focus pixels do not all need to be of the same size,
- stacked imagers are being introduced more and more. During IISW 2009, the buzz word was BSI, now it is “stacked”. Stacked imagers are going to solve all the problems ….
- the incorporation of W pixels continues; in more recent devices, up to 50 % of the pixels are W pixels.
In conclusion : no major new technologies were introduced, neither any pixel size below 1 um, but everything is getting better in performance and more compact in size.
June 13th, 2015
As already announced earlier, also in 2015 there will be a Harvest Imaging Forum. All forum information is now on-line, including agenda and registration form. More info can be found at www.harvestimaging.com/forum.php
May 5th, 2015
After the very successful forums in 2013 and 2014, a third one will be organized in December 2015 in Voorburg (The Hague), the Netherlands. The basic intention of the forum is to have a scientific and technical in-depth discussion on one particular imaging topic. The audience will be strictly limited to enhance and stimulate the interaction with the speaker(s) as well as to allow close contacts between the participants.
The subject of the third forum will be :
“3D Imaging with Time-of-Flight :
Solid-State Devices, Circuits and Architectures”.
A world-level expert in the field,
dr. David STOPPA,
has been invited and has agreed to address and explain the ins and outs of this important topic.
The agenda of the forum will be published soon; registration for the forum will start after the IISW2015.
April 8th, 2015
Maybe it is good to remind the visitors of this blog about the imaging training courses in the spring of 2015. There are still 3 different courses in the pipeline :
- a 2-day class to get an introduction in the world of CMOS image sensors. This class is intended for people who have almost no background in solid-state imaging. This course takes place in Delft on May 6-7, 2015. Organization through www.fsrm.ch.
- a 5-day class if you want to learn more about imagers than just the working principles. This class is also intended for “new-comers” in the field, but people who have already worked a few years in imaging can revitalize their knowledge as well. Key to this class are the exercise sessions at the end of every day, helping the participants to put the theory into practice. This course takes place on May 18-22, 2015 in Barcelona, and is organized by www.cei.se.
- a 2-day class with hands-on measurements and evaluation of an “unknown” camera. Because the participants have to perform all characterization work themselves, this course is NOT intended for people fresh in the imaging field. Preferably the course participants have a few years of experience in the arena of solid-state imaging. This course takes place in Munich, on June 2-3, 2015, organized by www.framos.com.
Albert, 8 april 2015.
February 27th, 2015
Also this year Shizuoka University was present at the ISSCC with an imager paper. Mochizuki presented a single-shot 200 Mfps 5×3 Aperture Compressive CMOS Imager. The chip consists of 5 x 3 subarrays (multi-aperture), and each subarray has 64 x 108 pixels, each of 11.2 um x 5.6 um. The chip is fabricated in 0.11 um CIS technology. The 15 sub-arrays all receive the same image information, and each sub-array has its own micro-lens. But the difference between the 15 sub-arrays is the exposure time. For each sub-array the exposure time is modulated/changed/scrambled in the time domain, such that all the different sub-arrays grab parts of the scenery, but all in different and sometimes mixed time slots. In this way, the information read out is a kind of compressed information in the time domain. After solving/reconstructing, the 15 images shot at the same time (= NOT with the same exposure time !) result in 32 different frames in the time domain. Thus the sensor has an inherent compression of 47 %.
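The claimed 47 % compression follows directly from the sub-array and frame counts; a minimal sketch of the arithmetic, nothing more :

```python
# 15 scrambled sub-array exposures are reconstructed into 32 time-domain frames,
# so only 15/32 of the full frame set is actually read off the chip.
subarrays = 5 * 3            # multi-aperture layout
frames_out = 32              # frames recovered after reconstruction
compression = subarrays / frames_out
print(f"{compression:.0%}")  # -> 47%
```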
Like many other papers of Shizuoka University, this paper also relies on a clever pixel design around a PPD, with a lot of knowledge in the device physics field. The paper described very nicely the principle of the compressed sensing, including simulation as well as measurement results.
February 26th, 2015
Here is another one : a paper of Samsung, presented by dr. Choi. His paper can be seen as a kind of continuation of the work he did for his PhD at the Univ. of Michigan : having a sensor ALWAYS TURNED ON in a kind of hibernation mode (= ultra-low power, low resolution, low quality), but waking up as soon as there is any movement in the scene and switching to a normal mode (= higher resolution, higher quality). Classical ways to lower power are reducing speed, reducing resolution, reducing the number of bits, etc. But what I appreciated very much in this work were two additional techniques to lower the power :
- using a classical PPD pixel in the normal mode at 3.3 V, and using the same pixel (with TG always switched ON) in a kind of 3T pixel mode operating at 0.9 V (with reduced performance),
- turning the circuitry of two adjacent PGAs (of 2 adjacent columns in the normal mode) into an 8-bit SAR ADC for the low-power, low-quality mode.
In this way the power of the ALWAYS ON mode was reduced by a factor of 500 compared to the normal mode. Final power consumption was 45.5 uW.
Some more numbers (Numbers add up to Nothing ! Neil Young in “Powderfinger”) : reduced resolution (/4), same frame rate (30 fps), supply voltage reduced from 3.3 V analog/1.8 V digital to 0.9 V for all, sensitivity down by a factor of 4, FPN up 20 x (but still less than 1 %) and random noise up by 4 x (expressed in DN, but is 1 DN in the high-quality mode equal to 1 DN in the low-quality mode ???). But the power goes down by 500 times !
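One number not explicitly spelled out above is the normal-mode power; it can be inferred from the always-on figure and the 500 x reduction (an inferred value, not one reported in the paper) :

```python
# Inferring the normal-mode power from the always-on power and reduction factor.
always_on_uW = 45.5    # reported always-on power consumption
reduction = 500        # reported power reduction factor
normal_mode_mW = always_on_uW * reduction / 1000
print(normal_mode_mW)  # -> 22.75 mW (inferred, not separately reported)
```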
February 26th, 2015
A second paper in the imaging session highlighted the work of NHK in cooperation with Forza Silicon. A 133 Mpixel (yes, you read it right, one hundred thirty three), 60 fps device was described. The device has on-chip ADCs, 1 SAR 12-bit ADC for 32 columns. The ADCs are located at both sides of the device, 242 ADCs at the top and 242 ADCs at the bottom of the chip. Each SAR ADC internally resolves 14 bits (with redundancy), but at the output each pixel is represented with 12 bits. The pixel size is 2.45 um, 2×1 shared, 2.5T/pixel, in a 35 mm full-frame format. Fabrication was done in 0.18 um 1P4M technology. Due to its large size, the chip is stitched in one direction. [There are not that many foundries that allow stitching in a CIS 0.18 um process, so it is easy to guess who fabricated this device.] At full speed, the device is delivering 1.15 Gbps/ch; maybe that does not sound that much, but the device has 112 channels in parallel. So in total, this adds up to almost 130 Gbps.
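The “almost 130 Gbps” checks out from the per-channel rate and the channel count; just the multiplication, no assumptions :

```python
# Aggregate output data rate of the 133 Mpixel, 112-channel device.
mbps_per_channel = 1150   # 1.15 Gbps per channel, expressed in Mbps
channels = 112
total_gbps = mbps_per_channel * channels / 1000
print(total_gbps)         # -> 128.8, i.e. "almost 130 Gbps"
```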
To capture all the information and to get all these bits off the chip, a total power consumption of 11 W is needed. About 50 % of this power goes to the digital blocks. All ADCs together take 1.67 W. A few more numbers : conversion gain of 80 uV/e, full well of 10005 electrons (don’t forget the last 5 electrons), dark current of 50 e/s @ 40 deg.C, temporal noise of 7.68 electrons and a dynamic range of 62.3 dB (data measured at 60 fps, gain of 2).
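The reported dynamic range is consistent with the full well and temporal noise figures. In the sketch below, the 0.8 V signal swing is derived from the conversion gain and full well and is not itself stated in the paper :

```python
from math import log10

# Dynamic range from the reported full well and temporal noise.
full_well_e = 10005
noise_e = 7.68
dr_db = 20 * log10(full_well_e / noise_e)
print(round(dr_db, 1))   # -> 62.3, matching the reported value

# Derived (not reported): signal swing = conversion gain x full well.
swing_V = 80e-6 * full_well_e
print(round(swing_V, 2)) # -> 0.8 V
```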