Archive for February, 2017

ISSCC 2017 (4)

Friday, February 10th, 2017

“A 0.44 e rms read-noise 32fps 0.5 Mpixel high-sensitivity RG-less-pixel CMOS image sensor using bootstrapping reset” from Shizuoka University was presented by T. Wang.  The device uses correlated multiple sampling (CMS) at the column level in combination with a high conversion gain.  The latter is obtained with a reset-gate-less pixel and a bootstrapping technique, resulting in a conversion gain of over 150 uV/electron.  The reset-gate-less pixel itself is not really new; it has already been published by the same group at other conferences.  By carefully designing the distance between the floating diffusion and the reset drain diode, the reset-gate-less device can be operated.  In this paper an extra bootstrapping technique is added to allow a larger voltage swing of the pixel.  Pictures of a scene illuminated at 0.1 lux were shown (after averaging 16 images!).  The pixel size is 11.2 um, with a full well of 4100 electrons.  The read noise is as low as 0.44 electrons rms.  Despite the low full well, a dynamic range of 72.3 dB is still mentioned.
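To put some numbers on the combination of high conversion gain and CMS, here is a small back-of-the-envelope sketch in Python.  Only the 150 uV/electron conversion gain and the 0.44 e- rms read noise are taken from the paper; the single-sample noise of 1.76 e- and the 16x CMS factor are purely illustrative assumptions, not values reported by the authors.

```python
# Back-of-the-envelope sketch of how conversion gain and correlated multiple
# sampling (CMS) relate to the input-referred read noise.  Only the 150 uV/e-
# and 0.44 e- rms figures come from the paper; everything else is illustrative.
import math

conversion_gain = 150e-6     # V per electron (paper: > 150 uV/e-)
read_noise_e    = 0.44       # electrons rms (paper's headline number)

# Referred to the sense-node output, 0.44 e- rms corresponds to:
noise_voltage = read_noise_e * conversion_gain
print(f"Output-referred noise: {noise_voltage * 1e6:.0f} uV rms")

# CMS with M samples ideally reduces uncorrelated (thermal) noise by sqrt(M):
def cms_noise(single_sample_noise_e, m_samples):
    return single_sample_noise_e / math.sqrt(m_samples)

# Hypothetical example: a 1.76 e- single-sample noise with 16x CMS
print(f"With 16x CMS: {cms_noise(1.76, 16):.2f} e- rms")
```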

The last paper in the imaging session was entitled “A 1ms high-speed vision chip with 3D stacked 140GOPS column-parallel PEs for spatio-temporal image processing” by T. Yamazaki of Sony.  The device really fully exploits the capabilities of 3D stacking.  The second layer of silicon contains a memory next to column-level processing elements and the column-level ADC.  In this bottom silicon layer, filtering of the data can be done, as well as target detection, target tracking and feature extraction.  The speed at which all operations are done is simply phenomenal.  The imaging part is made in a 90 nm 1P4M process, the bottom part in a 40 nm 1P7M process.  The pixel size is 3.5 um, the full well is 19,800 electrons, and the random noise is 2.1 electrons, resulting in 80 dB dynamic range at 12 bits.  As mentioned in the title, the processing in the spatio-temporal domain is completed within 1 ms.
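To give an idea of what “spatio-temporal image processing” on the logic layer can mean in practice, a minimal frame-differencing sketch for target detection is given below.  This is a generic illustration written in Python/NumPy, not Sony's actual column-parallel PE implementation.

```python
# Generic sketch of spatio-temporal processing: a temporal filter (frame
# difference) followed by a simple target detection (centroid of the changed
# pixels).  Illustrative only, not Sony's column-parallel PE firmware.
import numpy as np

def detect_motion(prev_frame, curr_frame, threshold=20):
    """Per-pixel temporal difference, thresholding, and crude target centroid."""
    diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
    mask = diff > threshold                # pixels that changed significantly
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()            # centroid of the moving object

# Two synthetic 12-bit frames with a bright blob that shifts by 5 pixels
prev = np.zeros((480, 640), dtype=np.uint16)
curr = np.zeros((480, 640), dtype=np.uint16)
prev[100:110, 200:210] = 3000
curr[100:110, 205:215] = 3000
print(detect_motion(prev, curr))
```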

Albert, 10-2-2017.

ISSCC 2017 (3)

Thursday, February 9th, 2017

Tsutomu Haruta of Sony presented “A 1/2.3 inch 20 Mpixel 3-layer stacked CMOS image sensor with DRAM”.  In just a few words: the sensor is composed of 3 layers: the top layer contains the photon conversion part (BSI), the middle layer contains a DRAM and the bottom layer contains the processing part.  It is the first time that a stacked imager with 3 layers has been shown.  The mutual connections between the various levels of silicon are realized by TSVs.  The imaging part can be read out very fast, much faster than the interface with the external world can handle.  So the DRAM is used as an intermediate frame buffer: fast readout of the imaging part with the data stored in the DRAM, followed by a slow readout of the DRAM to accommodate the slow interface of the total system.  The pixels are arranged in a 2 x 4 shared-pixel concept, with 8 column readout lines for two groups of 2 x 4 pixels.  4 rows of column-level ADCs are included to allow the fast readout of the focal plane.  Remarkable is the fact that the data generated in the top layer has to be transported in the analog domain to the lowest level, where the ADC is located.  Next the digital data is stored in the middle layer, being the DRAM.  It was not mentioned during the presentation, nor during the Q&A, why the DRAM is located between the top and bottom layers.

With this particular architecture, one can read out the sensor part extremely fast into the DRAM and read out the DRAM relatively slowly towards the outside world.  In this way rolling-shutter artefacts are limited.  Once the data is available in the DRAM, it is also possible to work in different formats, even in parallel with each other: full resolution, or limited resolution as a kind of digital zoom.  Another very nice feature of the sensor is its binning capability: by combining binning on the floating diffusion with binning in the voltage domain, the resolution of the imager can be drastically reduced.  If this reduced-resolution image is then sampled at a high speed, stored in the DRAM and retrieved at a lower speed, an “on-chip” slow motion is created.  In the binned lower-resolution mode, it is possible to store 63 frames in the DRAM, captured at a speed of 960 fps.  Demonstrations of this feature were shown during and after the presentation.  Great images!
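The arithmetic behind the “on-chip” slow motion is simple enough to sketch.  Only the 63 stored frames and the 960 fps capture rate come from the presentation; the 30 fps playback rate below is my own assumption.

```python
# Slow-motion arithmetic for the DRAM frame buffer: capture fast, play back slow.
# 63 frames and 960 fps are from the presentation; 30 fps playback is assumed.
capture_fps   = 960
stored_frames = 63
playback_fps  = 30

capture_window = stored_frames / capture_fps     # real time captured (s)
playback_time  = stored_frames / playback_fps    # time when played back (s)
print(f"Captured window : {capture_window * 1000:.1f} ms")
print(f"Playback length : {playback_time:.2f} s "
      f"({capture_fps / playback_fps:.0f}x slow motion)")
```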

Some numbers: in total 17 layers of interconnect are used in the 3-layer stacked imager: 6M for the CIS (90 nm), 4M for the DRAM (30 nm) and 7M for the logic (40 nm).  The imager has 21 Mpixels at a 1.22 um pixel pitch, the DRAM has a capacity of 1 Gbit, and the interface is MIPI based.

Shin'ichi Machida of Panasonic presented a paper entitled “A 2.1 Mpixel organic-film stacked RGB-IR image sensor with electrically controllable IR sensitivity”.  Panasonic already presented a couple of papers with organic films at last year's ISSCC.  But in this new presentation, 2 organic films are stacked on top of each other: the top one is sensitive to IR light, the bottom one is sensitive to RGB.  Both layers need a particular voltage across them to become light sensitive, and this light sensitivity shows a threshold behaviour.  Below a kind of threshold voltage the organic film is not light sensitive, and this threshold voltage differs between the RGB film (low threshold) and the IR film (high threshold).  So if a large voltage is applied across the sandwich of the two organic films, both become light sensitive; if a lower voltage is applied across the sandwich, only the RGB film becomes light sensitive.  In this way the light sensitivity of the IR film can be switched on and off while the RGB film is still active (although the sensitivity of the RGB film drops to about 50 % if the IR film is switched off).  Overall an interesting feature that other imagers with classical pixels cannot offer.  Unfortunately (just like last year) no information was given about noise, nor about dark performance; otherwise a good presentation.
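The bias-controlled switching of the IR sensitivity can be captured in a small toy model.  The threshold voltages below are placeholders, not values from the paper; only the threshold behaviour itself and the roughly 50 % drop of the RGB sensitivity are taken from the presentation.

```python
# Toy model of the stacked organic films: each film only responds above its
# own threshold voltage, so the applied bias selects RGB-only or RGB+IR mode.
# Threshold values are hypothetical placeholders.
def film_response(bias_v):
    V_TH_RGB, V_TH_IR = 3.0, 8.0        # hypothetical threshold voltages
    ir_active  = bias_v > V_TH_IR
    rgb_active = bias_v > V_TH_RGB
    # RGB sensitivity drops to roughly 50 % when the IR film is switched off
    rgb_sensitivity = 1.0 if ir_active else (0.5 if rgb_active else 0.0)
    ir_sensitivity  = 1.0 if ir_active else 0.0
    return rgb_sensitivity, ir_sensitivity

print(film_response(10.0))   # high bias: RGB + IR both sensitive
print(film_response(5.0))    # lower bias: RGB only, at reduced sensitivity
print(film_response(1.0))    # below both thresholds: no response
```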

Albert, 9-2-2017.

ISSCC 2017 (2)

Wednesday, February 8th, 2017

Wootaek Lim of the University of Michigan talked about “A sub-nW 80mlx-to-1.26Mlx self-referencing light-to-digital converter with AlGaAs photodiode”.  The work focuses on a wearable light sensor, for instance to measure the cumulative light exposure a person gets over a long period of time (e.g. UV radiation exposure).  Crucial parameters for this application are low power consumption, wide dynamic range and low relative error.  These requirements are met by using a special ring oscillator and counter as an integrating ADC, by using the photodiode voltage as the input in combination with a divider to extend the measurable voltage range, and by linearly coding the light intensity in the log-log domain.  All these techniques were explained in detail, including circuit diagrams.  As a result, with these new techniques, the power was reduced by over 1000x, the measurable range extends from 80 mlx up to 1.26 Mlx, all combined with the lowest conversion energy of 4.13 nJ/conv. at 50 klx.  The sensor is fully functional between -20 and +85 deg.C.
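To put the quoted input range in perspective, a quick calculation (my own, not from the paper) shows how many decades of illuminance the converter spans:

```python
# Quick arithmetic on the quoted illuminance range of the light-to-digital
# converter; this only restates the headline numbers from the paper.
import math

lux_min, lux_max = 80e-3, 1.26e6          # 80 mlx up to 1.26 Mlx
ratio   = lux_max / lux_min
decades = math.log10(ratio)
print(f"Input range: {ratio:.2e} : 1  (about {decades:.1f} decades)")
# Coding the intensity logarithmically compresses this very wide range into a
# compact digital code while keeping the relative error roughly bounded.
```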


“A 1.8 e temporal noise over 110dB dynamic range 3.4 um pixel pitch global shutter CMOS image sensor with dual-gain amplifiers, SS-ADC and multiple accumulation shutter” by Masahiro Kobayashi of Canon.  This was a great paper with a great presentation of the obtained results, but I did have serious doubts about the novelty of the work (and I was not the only one).  What is done is the implementation of a global shutter with a storage node in the charge domain, resulting in the so-called 6T pixel architecture.  To increase the fill factor of the pixels, 2-by-1 sharing is applied.  In a classical GS pixel, the charge needs to be stored on the PPD, on the SG and on the FD.  If they are all equal to each other in capacitive value, a particular full well is obtained which is pretty limited.  The idea now is to make the PPD smaller and the SG larger.  In that case the full well would be determined by the small PPD, but during the exposure the PPD can be emptied multiple times, so that the weakest link in the chain shifts to the larger SG.  This is not new: Canon themselves introduced this already at IEDM 2016, and Aptina published a similar solution at the IISW in 2009.  Nevertheless, besides this general idea, the presented sensor has a funnel-shaped light-guide structure above the pixels and an optimized light shield to keep the PLS low.  To enhance the dynamic range of the sensor, the columns are provided with a gain stage that automatically chooses between a gain of 1x or 4x.  With some clever timing of the transfer of the PPD and with an increased readout speed of the sensor, extra new options can be added, such as a wider dynamic range and in-pixel coded exposure.
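The multiple-accumulation idea is easy to capture in a few lines.  The capacities below are purely illustrative and not Canon's actual values; only the principle (a small PPD emptied several times into a larger SG) is taken from the paper.

```python
# Sketch of the "multiple accumulation shutter": a small photodiode (PPD) is
# transferred N times into a larger storage gate (SG) during one exposure, so
# the effective full well is no longer limited by the PPD.  Capacities are
# illustrative, not Canon's actual values.
def effective_full_well(ppd_capacity_e, sg_capacity_e, n_transfers):
    # Each transfer moves at most one full PPD of charge; the SG is the final limit.
    return min(n_transfers * ppd_capacity_e, sg_capacity_e)

ppd, sg = 4000, 16000                    # hypothetical capacities (electrons)
print(effective_full_well(ppd, sg, 1))   # classical GS: limited by the PPD -> 4000
print(effective_full_well(ppd, sg, 4))   # 4 accumulations: limited by the SG -> 16000
```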

Results and images were shown during the presentation, and despite the fact that not everything is/was new, the results were impressive: 5 Mpixels, up to 120 fps, 450 mW, 3.4 um pixel pitch, 130 nm 1P4M + LS process, 1.8 e noise floor, a maximum of 79 dB dynamic range (111 dB in the HDR mode), and 20 e/s dark current at 60 deg.C.

Albert, 8-2-2017.

ISSCC 2017 (1)

Tuesday, February 7th, 2017

Bongki Son of Samsung presented a paper “A 640 x 480 dynamic vision sensor with 9um pixel and 300MEPS address-event representation”.  This work reminds me very much of the research of Tobi Delbruck and of the projects of Chronocam.  A sensor is developed that does not generate standard images but only indicates in which pixels the light intensity changes.  The pixel used in this application is pretty complex, with more than 10 transistors and at least two capacitors per pixel.  The results shown at the end of the presentation gave quite an impressive picture of what can be achieved by such a device.
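For readers not familiar with address-event representation (AER): instead of frames, the sensor outputs a stream of events, each tagged with the pixel address, the polarity of the change and a timestamp.  The snippet below is a generic illustration of such a stream, not Samsung's actual interface format.

```python
# Minimal sketch of an address-event representation (AER) stream: events carry
# (x, y, polarity, timestamp) for pixels whose brightness changed.  Generic
# illustration only, not Samsung's actual output format.
from dataclasses import dataclass
import numpy as np

@dataclass
class Event:
    x: int          # column address (0..639)
    y: int          # row address (0..479)
    polarity: int   # +1 = brightness increase, -1 = decrease
    t_us: int       # timestamp in microseconds

def accumulate(events, width=640, height=480):
    """Collapse an event stream into a simple change map for visualization."""
    frame = np.zeros((height, width), dtype=np.int32)
    for ev in events:
        frame[ev.y, ev.x] += ev.polarity
    return frame

stream = [Event(10, 20, +1, 0), Event(10, 21, -1, 3), Event(11, 20, +1, 7)]
print(accumulate(stream)[18:23, 9:13])   # small window around the active pixels
```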

InSilixa presented a paper “A fully integrated CMOS fluorescence biochip for multiplex polymerase chain reaction (PCR) processes”.  This disposable CMOS biochip allows DNA analysis with a flow-through fluidic system.  The chip includes a 32 x 32 array of DNA biosensors.  Next to the photosensitive part, every pixel also contains quite some circuitry; even a heater (fabricated in metal 4) is part of every pixel.  Another critical feature of the design is the on-chip interference filter that needs to block the excitation light (around 500 nm), but has to pass the low-level fluorescence light that needs to be detected (around 590 nm).

Min-Woong Seo of Shizuoka University presented “A programmable sub-nanosecond time-gated 4-tap lock-in pixel CMOS image sensor for real time fluorescence lifetime imaging microscopy”.  Also in this case the pixel is pretty large and contains a lot of extra electronics next to the light-sensitive area.  The modulation pixel has 4 taps, which are addressed every 0.9 ns (= very fast!).  The pixel looks very much like a CMOS 4T pixel with a charge storage node for global shuttering, but in this case the pixel has 4 charge nodes to store information.  It is not the first time that Shizuoka University has published pixels for ToF applications, and I am always very much intrigued by their device simulations (they use the same tools as Delft University of Technology).  It is indeed amazing to see how narrow-channel effects are used in this pixel to speed up the device.
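As a reminder of how such time-gated samples are turned into a fluorescence lifetime, the classic two-gate rapid-lifetime-determination formula is sketched below.  The sensor described above has four taps, which allows more robust estimates, but the two-gate case shows the basic principle; this is not the authors' exact algorithm.

```python
# Two-gate rapid lifetime determination: for an exponential decay sampled by
# two equal-width gates separated by dt, tau = dt / ln(Q1/Q2).  The 4-tap pixel
# above provides more samples, but the principle is the same.
import math

def lifetime_two_gate(q1, q2, gate_separation_ns):
    return gate_separation_ns / math.log(q1 / q2)

# Example: a 4 ns fluorescence lifetime sampled with gates 0.9 ns apart
tau_true, dt = 4.0, 0.9
q1 = math.exp(-0.0 / tau_true)
q2 = math.exp(-dt / tau_true)
print(f"Recovered lifetime: {lifetime_two_gate(q1, q2, dt):.2f} ns")
```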

Albert, 7-2-2017.