Archive for February, 2013

International Solid-State Circuits Conference 2013 (4)

Monday, February 25th, 2013

“A 3D vision 2.1 Mpixels image sensor for single-lens camera systems”, by S. Koyama of Panasonic.  The basic idea is to perform depth sensing with a “standard” 2D image sensor.  To do so, every horizontal pair of pixels is provided with one lenticular (cylindrical) lens.  The result is a structure in which one pixel of the pair “looks” at the beams coming in from the left (= left eye), and the other pixel of the pair “looks” at the beams coming in from the right (= right eye).  Based on the difference between the two, the depth can be measured.  Simple idea, but a few important items need to be reported :

–       The standard Bayer pattern is no longer applicable, because the two paired pixels need to have the same color.  So the CFA is a kind of Bayer pattern that is stretched in the horizontal direction across two pixels,

–       The really difficult part of this concept lies in the digital micro-lenses that sit on every individual pixel, between the silicon and the lenticular lens.  These digital micro-lenses are/were described elsewhere, but it looks like they are key to this idea, especially for the pixels situated towards the edges of the image sensor,

–       The method only works with low F-numbers for the main lens (e.g. F1.4).

Measurement results show that the “left eye” and the “right eye” really can discriminate between various angles of incidence.  Their peak sensitivity is 2 times that of a classical pixel, basically showing that the 3D concept works very efficiently (2 times is the best you can get, because all the information of 2 pixels is brought into a single pixel).
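To make the depth extraction concrete : the left-eye and right-eye sub-images are shifted with respect to each other by a disparity that shrinks with distance, just as in a two-camera stereo setup.  Below is a minimal block-matching sketch (my own illustration in Python; the focal length and baseline values are placeholders, not numbers from the paper, where the effective baseline is set by the main-lens aperture) :

```python
import numpy as np

def depth_from_disparity(left, right, max_disp=16, block=8,
                         focal_px=1500.0, baseline_m=0.001):
    """Block-matching depth sketch for a left/right pixel-pair sensor.

    Expects float images. focal_px and baseline_m are illustrative
    placeholders, not values from the Panasonic paper.
    """
    h, w = left.shape
    depth = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y+block, x:x+block]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x + 1)):
                cand = right[y:y+block, x-d:x-d+block]
                err = np.sum((ref - cand) ** 2)
                if err < best:
                    best, best_d = err, d
            # depth = f * B / disparity (clamped to avoid divide-by-zero)
            depth[by, bx] = focal_px * baseline_m / max(best_d, 1)
    return depth
```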

“A 187.5 uVrms read noise 51 mW 1.4 Mpixel CMOS image sensor with PMOSCAP column CDS and 10b self-differential offset-cancelled pipeline SAR-ADC”, by J. Deguchi (Toshiba).  By using pMOS capacitors in the columns to perform CDS, 50 % of the area can be saved thanks to the high intrinsic capacitance value of the pMOS.  These capacitors were applied not only in the CDS circuitry but also elsewhere in the controller, resulting in a lower area and a power reduction of 40 %.  A similar story for the CDAC in the ADC : 50 % of the size and only 20 % of the power compared to previously reported devices.  Besides all this good news, a noise level of 4.5 electrons was shown (take into account that the conversion gain is “only” 41.8 uV/electron).
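For those who want to check the numbers : the 4.5 electrons follow directly from the read noise in the paper title and the conversion gain :

```python
read_noise_uv = 187.5        # read noise in uVrms, from the paper title
conv_gain_uv_per_e = 41.8    # conversion gain in uV per electron
print(read_noise_uv / conv_gain_uv_per_e)  # ~4.5 electrons, input-referred
```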

Albert, 24-02-2013.

 

International Solid-State Circuits Conference 2013 (3)

Saturday, February 23rd, 2013

Paper presented by L. Braga (FBK, Trento) : “An 8×16 pixel 92kSPAD time-resolved sensor with on-pixel 64 ps 12b TDC and 100MS/s real-time energy histogramming in 0.13 um CIS technology for PET/MRI applications”. After an introduction about PET and the various sensor options for PET, the author gave details about his own sensor architecture, called a mini Si-PM. One of the known limitations of SPADs is their small fill factor, but apparently, when the fill factor is made larger, the dark-count rate and the yield worsen more than linearly with the area. So instead of using one large SPAD, many smaller SPADs are arranged in a parallel structure. All these smaller SPADs are combined in a large OR-tree, but to avoid too much overlap of the dead times of all the SPADs, several monostables are incorporated in the OR-tree. This results in spatial as well as temporal compression, with a 300 ps pulse every time one of the SPADs in the OR-tree fires. One level higher in the hierarchy, the 8×16 pixels are connected in a large H-tree to avoid differences in delays. The overall chip contains 92k SPADs (8 x 16 pixels, each having 720 SPADs at 42.6 % fill factor, each pixel connected to its own on-pixel time-to-digital converter).
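To illustrate what the monostables buy you : every SPAD firing is reshaped into a short 300 ps pulse before entering the OR, so two nearly simultaneous events still show up as two separate edges instead of one long merged pulse. A small simulation sketch (my own illustration; only the 300 ps pulse width is from the talk, the other timing constants are invented) :

```python
import numpy as np

PULSE_PS = 300    # monostable output width, as quoted in the talk
BIN_PS = 50       # simulation time step (illustrative)

def or_tree_output(fire_times_ps, t_end_ps=20_000):
    """OR of monostable-shaped SPAD pulses on a discrete time grid."""
    t = np.arange(0, t_end_ps, BIN_PS)
    out = np.zeros_like(t, dtype=bool)
    for ft in fire_times_ps:
        out |= (t >= ft) & (t < ft + PULSE_PS)
    return out

def count_edges(out):
    """Rising edges = number of distinguishable photon events."""
    return int(np.sum(~out[:-1] & out[1:]) + out[0])

# Two SPADs firing 1 ns apart are still resolved as two events :
print(count_edges(or_tree_output([2_000, 3_000])))   # -> 2
```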

During the presentation several measurements were shown to illustrate the working of this large SPAD chip.

Paper presented by C. Niclass (Toyota) : “A 0.18 um CMOS SoC for a 100m range, 10 fps 200×96 pixel Time of Flight depth sensor”. This chip implements a novel idea to discriminate the ToF signal from the background by spatiotemporal correlation of photons. The idea is based on recording the time of arrival of each photon (background + ToF signal); by building a kind of histogram of these arrival times in the time domain, the ToF signal can be discriminated from the background. The sensor is only 96 pixels in height, and to extend the vertical resolution, a scanning method with a rotating polygon mirror with 6 facets is used. The sensor itself contains a “row” of ToF detectors as well as a “row” of standard intensity detectors. The ToF pixels are based on SPADs; no information was given about the standard intensity detectors.
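The principle is easy to illustrate in a few lines : histogram all photon arrival times over many laser cycles; the uncorrelated background spreads evenly over the bins, while the echo piles up in one bin whose position gives the distance. A hedged sketch with invented photon counts (the chip of course does this correlation in dedicated on-chip hardware) :

```python
import numpy as np

C = 3e8            # speed of light, m/s
CYCLE_NS = 1000    # laser repetition period (illustrative)
BIN_NS = 1         # histogram bin width (illustrative)

rng = np.random.default_rng(0)
tof_ns = 2 * 50.0 / C * 1e9     # round-trip time for a 50 m target

# Signal photons cluster around the round-trip time; background is uniform.
signal = rng.normal(tof_ns, 0.5, size=2_000)
background = rng.uniform(0, CYCLE_NS, size=50_000)
arrivals = np.concatenate([signal, background])

hist, edges = np.histogram(arrivals, bins=int(CYCLE_NS / BIN_NS),
                           range=(0, CYCLE_NS))
peak_ns = edges[np.argmax(hist)] + BIN_NS / 2
print(peak_ns * 1e-9 * C / 2)   # recovered distance, ~50 m
```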

The complete chip is relatively large : 4.7 mm x 6.7 mm, while the pixels only take up about 2 x 0.15 mm x 1.6 mm (guess !). So a huge amount of the chip area is used for memory, DSP, TDCs, etc. Evaluation results show a very high accuracy of the distance measurements. According to the presenter, this chip outperforms all other state-of-the-art technologies.

Paper presented by O. Shcherbakova (University of Trento) : “3D camera based on linear-mode gain-modulated avalanche photodiodes”. The technology described in this paper tries to improve on existing 3D sensors w.r.t. power consumption, frame rate and precision. The ToF method applied is continuous-wave ToF. The heart of the sensor is the photodetector plus the demodulator of the signal, both based on avalanche photodiodes. The device is fabricated in 0.35 um CMOS 1P4M; pixels are 30 um x 30 um with a fill factor of 25.7 %. The demodulation contrast reported is pretty high : 80 % at 200 MHz and 650 nm, with a maximum frame rate of 200 fps. The precision of the depth sensing is 1.9 cm at 2 m distance and 5.7 cm at 4.75 m distance. Worthwhile to mention : this paper had a live demonstration during the demo session, the only one of the image sensor session.
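For readers less familiar with continuous-wave ToF : the textbook approach (not necessarily the exact scheme of this paper) samples the demodulated signal at four phase offsets and recovers the phase shift of the returning light, which maps linearly onto distance :

```python
import math

def cw_tof_distance(a0, a1, a2, a3, f_mod=200e6):
    """Textbook four-tap CW-ToF demodulation (my illustration).

    a0..a3 are correlation samples at 0/90/180/270 degrees; f_mod =
    200 MHz matches the demodulation frequency quoted in the paper,
    although long-range operation may use other frequencies.
    """
    c = 3e8
    phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
    return c * phase / (4 * math.pi * f_mod)   # distance in meters

# A 90-degree phase shift lands at a quarter of the unambiguous
# range c / (2 * f_mod) = 0.75 m :
print(cw_tof_distance(0.0, -1.0, 0.0, 1.0))    # -> 0.1875
```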

Albert, 23-02-2013.

International Solid-State Circuits Conference 2013 (2)

Friday, February 22nd, 2013

Next are two (short = 15 min) presentations of imagers with a 3D-fabrication technology.  The first paper came from Olympus, entitled “A rolling-shutter distortion-free 3D stacked image sensor with -160 dB parasitic light sensitivity in-pixel storage node”, by J. Aoki.  The device is made out of a double-layer structure : the top layer holds the BSI photodiode array, the bottom layer holds the storage nodes as well as the column processing.  The pixel architecture is a 4-shared BSI-PPD structure, with all 4 photodiodes, four transfer transistors, one floating diffusion, one reset transistor and one source follower in the top layer.  A micro-bump connects the source follower of the top layer to the bottom layer.  In the latter, the select transistor is present, plus 4 sample-and-hold switches and capacitors.  These act as the storage nodes that realize the global shutter.  These storage nodes are in turn provided with an individual source follower and select transistor.  So for every group of 4 pixels, one micro-bump is needed to provide the electrical contact.  Between the two layers an opaque shield is inserted to shield the storage nodes from any incoming light.  That is the explanation of the -160 dB parasitic light sensitivity.
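To put the -160 dB in perspective : assuming the common 20·log10 voltage-ratio convention for parasitic light sensitivity (the paper may define it differently), it corresponds to a stray-light response of the storage node 8 orders of magnitude below the photodiode response :

```python
pls_db = -160.0
ratio = 10 ** (pls_db / 20)   # 20*log10 convention (my assumption)
print(ratio)                  # 1e-08 : storage-node response is 8 orders
                              # of magnitude below the photodiode response
```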

A very simple, but apparently very efficient solution.  Nevertheless, only very limited performance data was shown.  Pixel size is 4.3 um x 4.3 um, 30 frames/s, minimum bump pitch 8.6 um, 704 x 512 pixels, fabricated in a 0.18 um 1P6M process.  Unfortunately no data about noise or dark current.  Remarkable is the mentioned full-well capacity : 30,000 HOLES.  Although no further comments were given (nor asked) : this is a hole detector, with all circuitry based on p-MOS transistors.

Next in line was the Sony presentation by S. Sukegawa : “A 1/4-inch 8M pixel back-illuminated stacked CMOS image sensor”.  The basic idea is to use the carrier substrate of the BSI structure as an active layer and put all the circuitry onto/into this carrier layer.  Very simple and straightforward, but a challenging technology !  In the device presented, the connection between the two layers is made by TSVs.  These TSVs are located at the periphery of the die, so there are no connections or TSVs in the active area.  Unfortunately no pictures or cross-sections were shown, nor any data about the TSVs.

As far as the circuitry on the top layer is concerned, the following is included : the full imaging array, the addressing means, as well as the comparators in the column circuitry, which form the front-end part of the column-level ADCs.  The counters, being the back-end part of the column-level ADCs, are located in the second layer.  This architecture suggests that every column has a TSV, or that a limited number of TSVs is used in combination with a multiplexer and de-multiplexer.  But no information was given about this.
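A comparator front-end plus a counter back-end strongly suggests a single-slope column ADC : a shared ramp is compared against the sampled pixel voltage, and the counter value at the crossing instant is the digital code.  A minimal behavioral model (my own sketch; the actual topology was not spelled out in the talk) :

```python
def single_slope_adc(v_pixel, v_ramp_start=0.0, lsb=0.001, n_bits=10):
    """Behavioral model of a single-slope column ADC.

    The counter increments while a linear ramp is below the sampled
    pixel voltage; the comparator stops the count when the ramp
    crosses it.
    """
    count = 0
    v_ramp = v_ramp_start
    while v_ramp < v_pixel and count < (1 << n_bits) - 1:
        v_ramp += lsb    # ramp step per clock
        count += 1       # counter in the logic die keeps running
    return count

print(single_slope_adc(0.5))   # ~500 counts for 0.5 V at 1 mV/LSB
```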

The top part was fabricated in a 90 nm CIS process, the bottom part in a 65 nm logic process containing 2.4 Mgates.  The overall chip size is 70 % of that of an equivalent single-layer design.

As far as the CFA is concerned : an RGBW arrangement is used, first reshaped into a Bayer pattern and next demosaiced.  The device also has the option of alternating lines with long and lines with short exposure times to extend the dynamic range.  So overall it is not surprising that so many logic gates are used in the bottom layer : it contains a lot of image-processing stuff.  Some key performance parameters : 5000 electrons full well for a pixel of 1.12 um x 1.12 um, 30 fps at full resolution, 2.2 electrons of noise at an analog gain of 18 dB, and a conversion gain of 63.2 uV/electron.
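The long/short line interleaving extends the dynamic range in the usual way : take the long-exposure value where it is unsaturated, otherwise substitute the short-exposure value scaled by the exposure ratio.  A hedged sketch (the ratio and saturation threshold are invented; the real pipeline will also have to restore the vertical resolution) :

```python
import numpy as np

def fuse_dual_exposure(rows_long, rows_short, ratio=16.0, sat=0.95):
    """Fuse alternating long/short exposure lines into one HDR result.

    ratio and sat are illustrative placeholders; real pipelines also
    interpolate back to full vertical resolution.
    """
    long_ok = rows_long < sat    # unsaturated long-exposure pixels
    return np.where(long_ok, rows_long, rows_short * ratio)

line_long = np.array([0.10, 0.50, 1.00, 1.00])    # clipped at 1.0
line_short = np.array([0.006, 0.031, 0.20, 0.50])
print(fuse_dual_exposure(line_long, line_short))
# -> [0.1, 0.5, 3.2, 8.0] : range extended beyond the long-exposure clip
```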

Albert, 22-02-2013.

 

International Solid-State Circuits Conference 2013 (1)

Thursday, February 21st, 2013

Today, Thursday, February 21st, 2013, the imagers were presented at the ISSCC in San Francisco.  In this blog post (and more to come) I would like to give a short review of the presented material.  As usual I try to do this without figures or drawings, so as not to violate any copyrights of the authors and/or of ISSCC.

The image sensor session kicked off with two papers from the University of Michigan.  The first one, delivered by J. Choi, was entitled “A 3.4 uW CMOS image sensor with embedded feature-extraction algorithm for motion-triggered object-of-interest imaging”.  The basic idea is to develop an imager that can be used in a large sensor network and is characterized by a minimum power consumption.  For this purpose, a motion-triggered sensor was developed.  That is not really new, but in this paper, once the sensor is triggered, it moves into an object-of-interest mode instead of a region-of-interest mode.  So the sensor recognizes persons and tries to track them.  All circuitry needed for that is included in the pixel and/or on the chip.

In standard (sleeping) mode the sensor delivers a 1-bit motion-sensing frame; once a moving object is recognized, the sensor wakes up and switches into an 8-bit object-detection and object-tracking mode.  Technically, the sensor has a pretty clever pixel design, with an in-pixel memory capacitor for frame storage (used to detect motion).  But most inventive is the combination of the circuitry of two pixels to build a low-power output stage, operated at 1.8 V.  So the pixel circuitry is reconfigurable depending on the mode of operation; this reconfigurability allows the low supply voltage and results in the low power.
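Conceptually, the in-pixel capacitor turns motion detection into a per-pixel frame difference : compare the current sample against the stored one and output a single bit.  A software analogue of that logic (the threshold is illustrative; in the chip the comparison happens in the analog domain) :

```python
import numpy as np

def motion_bitmap(frame, stored, threshold=0.1):
    """1-bit motion frame : |current - stored| above a threshold.

    In the chip this difference is taken against the charge held on
    the in-pixel memory capacitor.
    """
    return (np.abs(frame - stored) > threshold).astype(np.uint8)

stored = np.zeros((4, 4))
frame = stored.copy()
frame[1:3, 1:3] = 0.5    # a small "object" moved into view
print(motion_bitmap(frame, stored).sum())   # 4 pixels flag motion -> wake up
```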

The recognition of objects (persons) is based on a “gradient-to-angle” converter, which is implemented on-chip.  By making smart use of simple switched-capacitor circuitry, complicated trigonometric calculations can be avoided. 
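One way such a trig-free conversion can work (my own software analogue, not the paper's actual circuit) : the gradient direction is quantized into a small number of angle bins using only sign tests and scaled magnitude comparisons, operations that map naturally onto switched-capacitor hardware :

```python
def angle_bin_8(gx, gy):
    """Quantize a gradient direction into 8 bins of 45 degrees, no trig.

    tan(22.5 deg) ~ 0.4142; comparing |gy| against scaled |gx| replaces
    the atan2 call, and sign tests then select the octant.
    """
    T = 0.4142
    ax, ay = abs(gx), abs(gy)
    if ay <= T * ax:         # near horizontal
        return 0 if gx >= 0 else 4
    if ax <= T * ay:         # near vertical
        return 2 if gy >= 0 else 6
    if gx >= 0:              # diagonals
        return 1 if gy >= 0 else 7
    return 3 if gy >= 0 else 5

print(angle_bin_8(1.0, 1.0))   # -> 1 (the 45-degree bin)
```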

The second paper from the same university was delivered by G. Kim : “A 467 nW CMOS visual motion sensor with temporal averaging and pixel aggregation”.  Basically the same application : an ultra-low-power sensor with motion detection to wake up the sensor.  The device developed makes use of 4 different pixel designs/functionalities in every 8 x 8 kernel of pixels.  These different types of pixels allow the sensor to extend its range of motion detection, from slow to fast object motion.  The “temporal averaging” in the title refers to one of the pixel types, which has a long exposure time; the “pixel aggregation” refers to the aggregation/summation of the signals coming from 16 pixels out of the group of 8 x 8 pixels.
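A toy illustration of the two title ideas (only the 16-out-of-64 aggregation count is from the talk; the kernel layout below is an invented placeholder) :

```python
import numpy as np

def aggregate_kernel(kernel_8x8, agg_mask):
    """Sum the 16 aggregation pixels of an 8x8 kernel into one value.

    agg_mask marks which 16 of the 64 pixels are aggregation pixels;
    the actual layout was not captured in my notes, so a placeholder
    mask is used below.
    """
    assert agg_mask.sum() == 16
    return kernel_8x8[agg_mask].sum()

# Placeholder layout : every other pixel of every other row.
mask = np.zeros((8, 8), dtype=bool)
mask[::2, ::2] = True    # 16 pixels

rng = np.random.default_rng(1)
print(aggregate_kernel(rng.uniform(size=(8, 8)), mask))

# "Temporal averaging" is simply a longer integration : a slow-moving
# object that barely changes frame-to-frame still builds up a
# detectable difference on the long-exposure pixels.
```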

Worthwhile to notice : the device is fabricated in a standard logic 0.13 um CMOS process, 1P8M, so no PPD !  During the presentation, the author gave a lot of details about the design as well as about the working principle of the various pixels.

Albert, 22-02-2013.