Archive for February, 2019

ISSCC (3)

Thursday, February 21st, 2019

University of Toronto, in cooperation with Synopsys and FBK, presented “Dual-tap pipelined-code-memory coded-exposure pixel CMOS image sensor for multi-exposure single-frame computational imaging” (try to explain this to your mother !).  The basic idea comes down to the fact that with a coded aperture quite some information is thrown away (e.g. 50 %) whenever a particular aperture is opaque to the incoming light.  Only when the aperture is transparent is the incoming information used.  In this paper, a pixel is presented that has one large PPD and two TG-FD-SF combinations.  The information is read out and accumulated through the first FD or through the second FD.  In this way no incoming photons are lost.  The content of the first FD can be seen as a kind of complement to the content of the second FD and vice versa.  The pixel looks very similar to some pixels presented for ToF applications.  But for the coding (switching between the two FDs), an in-pixel memory is needed, composed of two latches.  The pixel size is 11.2 um, a fill factor of 45.3 % is achieved, and 27 % of the pixel area goes to the memory and extra logic in the pixel.  The contrast between the two taps is reported to be 99 % at 180 fps.  The device is fabricated in a 110 nm CIS process.
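
To see why the dual-tap scheme loses no light, here is a minimal numerical sketch (my own illustration with made-up numbers, not from the paper) : charge from each coded sub-exposure is steered to one of the two FDs, so the two taps always sum to the total collected signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up per-sub-exposure photon counts for one pixel.
photons = rng.poisson(lam=50, size=16)

# Binary exposure code held in the in-pixel memory (the code pattern
# here is arbitrary).
code = rng.integers(0, 2, size=16).astype(bool)

# Dual-tap accumulation : charge is steered to FD1 when the code is 1,
# to FD2 when it is 0 -- nothing is thrown away.
tap1 = photons[code].sum()
tap2 = photons[~code].sum()

# The two taps are each other's complement and sum to the total signal.
assert tap1 + tap2 == photons.sum()
```

A single-tap coded pixel would only keep `tap1` and discard the rest.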

Applications mentioned for this sensor are one-shot structured light, one-shot photometric stereo, compressive sensing, etc.


The sixth paper in the session came from Panasonic : “A 400 x 400 pixel 6 um pitch vertical avalanche photodiodes (VAPD) CMOS image sensor based on 150ps-fast capacitive relaxation quenching (RQ) in Geiger mode for synthesis of arbitrary gain images”.  The main goal of this work is to incorporate a single-photon avalanche photodiode function into a conventional CMOS image sensor pixel.  The pixel proposed in this paper looks identical to a 4T pixel, except that the PPD is replaced by an avalanche photodiode.  Because the gain of the avalanche photodiode is not known, it looks like the application for this device is limited to binary images, which can be used in time-of-flight, surveillance, AI and robotics.


The next paper was presented by Univ. of Edinburgh in cooperation with ST and Heriot-Watt University : “A 256×256 40nm/90nm CMOS 3D-stacked 120dB dynamic range reconfigurable time resolved SPAD imager”.  The design challenges seen by the authors (for a 1 Mpixel SPAD array @ 100 Mcps/pixel in high background conditions) : 100 Tphoton events/second, which amounts to more than 1 Pb/s of raw data (at 10 bits per conversion) and 50 W of TDC power.  The presentation was built up to highlight the advantages of SPADs (time resolution, no ADC, high dynamic range, single-photon sensitivity, low median dark noise) and to counteract the disadvantages of SPADs (power consumption, TDC area, high I/O data rate, low fill factor, many hot pixels).
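
Those headline numbers follow directly from the stated assumptions ; a quick sanity check of the arithmetic :

```python
pixels = 1_000_000        # 1 Mpixel SPAD array
events_per_pixel = 100e6  # 100 Mcps/pixel in high background light

# Total photon event rate of the array.
events_per_s = pixels * events_per_pixel
assert events_per_s == 1e14   # 100 Tphoton events/second

# One 10-bit TDC conversion per event gives the raw data rate.
bits_per_event = 10
data_rate = events_per_s * bits_per_event
assert data_rate == 1e15      # 1 Pb/s
```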

During the presentation all the drawbacks were addressed one after another and solutions were proposed and implemented to counteract them :

  • Stacking the backside-illuminated SPAD above the readout IC (40 nm/90 nm process) solves the fill-factor issue (pixel pitch of 9.2 um),
  • In-pixel histogramming reduces the I/O data rate,
  • The pixels are composed of multiple SPADs, with the option to inhibit SPADs with a large dark count rate (also called “screamers”),
  • Event-driven clocking reduces the power consumption,
  • Compact TDCs are implemented, ring-oscillator based in combination with shift registers and counters (TDC area of 130 um2).
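
To illustrate the histogramming bullet, a rough sketch with made-up numbers (my own, not from the paper) : instead of streaming every TDC timestamp off-chip, only the per-frame histogram is read out.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical frame for one pixel : 10,000 photon arrivals,
# each time-stamped with a 6-bit TDC code (64 time bins).
n_events = 10_000
timestamps = rng.integers(0, 64, n_events)

# Naive readout : stream every 6-bit timestamp off-chip.
raw_bits = n_events * 6

# In-pixel histogramming : accumulate 64 bins on-chip and read the
# histogram once per frame (say 16 bits per bin).
hist = np.bincount(timestamps, minlength=64)
hist_bits = 64 * 16

assert hist.sum() == n_events        # no events are lost
print(f"I/O reduction: {raw_bits / hist_bits:.1f}x")
```

The reduction grows with the photon rate, which is exactly the high-background regime the authors worry about.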

Albert, 21-02-19

ISSCC (2)

Wednesday, February 20th, 2019

“A Data Compressive 1.5b/2.75b Log-Gradient QVGA Image Sensor with Multi-Scale Readout for Always-On Object Detection” by Stanford Univ. and Robert Bosch.

Object detection in the classical way can be done by a convolutional neural network connected behind an image sensor.  But that solution is pretty power hungry and not really suited for an always-on application.  This paper suggests doing a coarse detection and feature extraction right after the sensor, and if something is detected, a wake-up signal is generated to activate a convolutional neural network.  The coarse detector and feature extractor (in the analog domain) are integrated on the image sensor chip.  A very popular feature extraction is based on a so-called Histogram of Oriented Gradients (HOG).  So one tries to find, for each 8×8 block of pixels, the orientation of the gradient in the image data, and this method can be repeated for multiple scales of the image.

The trick of the HOG implementation in this paper is not to look at the difference in image values to detect an orientation in the image data, but at the logarithmic ratio of pixel values.  Taking ratios instead of differences makes the concept invariant to the illumination level.  Simple, but very effective idea.  The log gradients can be quantized to 1.5 bit or 2.75 bit and still be robust to illumination levels.
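
The illumination invariance is easy to verify : a common gain in front of both pixels cancels in the log ratio (a two-line check with made-up pixel values, not the paper's data) :

```python
import numpy as np

# Two neighbouring pixel values (made-up numbers).
p1, p2 = 120.0, 80.0

# The same scene under 10x stronger illumination.
gain = 10.0
q1, q2 = gain * p1, gain * p2

# Plain differences scale with the illumination level...
assert (q1 - q2) == gain * (p1 - p2)

# ...but log-gradients do not : log(g*p1) - log(g*p2) = log(p1/p2).
lg_dim = np.log(p1) - np.log(p2)
lg_bright = np.log(q1) - np.log(q2)
assert np.isclose(lg_dim, lg_bright)
```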

The design highlights of this device are : the log-gradient image sensor using standard 4T pixels in a QVGA configuration ; compressing the data to 1.5 bits or 2.75 bits, amounting to a 25x data compression ; the option for pixel binning at readout for multi-scale object detection ; and column-parallel ratio-to-digital converters to digitize low-resolution ratios of the pixels.

When it comes down to performance numbers, an energy per pixel of 127 pJ is reported, in a 0.13 um 1P4M process, with a pixel size of 5 um.  Example images were shown to prove the concept of the image sensor.

“A 76mW 500fps VGA CMOS Image Sensor with Time-Stretched Single-Slope ADCs Achieving 1.95 e Random Noise” by Yonsei University.  Single-slope ADCs at the column level are widely used in CIS.  So everyone knows the advantages, but also the disadvantage of requiring a lot of clock cycles.  In case one wants a fast conversion rate of for instance 1 us, a clock frequency in the GHz range is needed.  This paper tries to find a solution for this issue : a kind of hybrid ADC is proposed.  In the case of a 10-bit ADC, the 6 most-significant bits are converted by a classical single-slope ADC.  What is further measured (to find the 4 least-significant bits) is the time between the toggling of the comparator and the next clock edge of the single-slope ADC.  The toggling moment of the comparator is the start of the so-called time-stretching activity.  The end of the time-stretching activity is set by the ADC clock divided by 16.  Why divided by 16 ?  For the simple reason that in that case the original clock of the ADC can also be used to count the number of clock cycles in the time-stretched period.  Very simple, very clever idea !  The total number of cycles in the new ADC (10 bits) is now 64 for the first 6 bits, plus a maximum of 16 cycles for the time-stretched value.  In total 80 cycles instead of 1024 (12.8 x faster !), and a conversion of 0.8 us can be realized with a clock of 100 MHz.  The new ADC not only results in a faster device, but it also consumes much less power (76 mW instead of 365 mW with the standard SS-ADC).
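
The cycle-count arithmetic of the time-stretched scheme checks out (just the numbers as presented, not a model of the circuit) :

```python
ADC_BITS = 10
MSB_BITS = 6                 # coarse single-slope conversion
STRETCH = 16                 # stretch factor = 2**(ADC_BITS - MSB_BITS)

classic_cycles = 2**ADC_BITS               # 1024 for a plain SS-ADC
hybrid_cycles = 2**MSB_BITS + STRETCH      # 64 + at most 16 = 80
assert hybrid_cycles == 80

speedup = classic_cycles / hybrid_cycles
assert speedup == 12.8

# 80 cycles at a 100 MHz clock gives the quoted conversion time.
clock_hz = 100e6
conversion_s = hybrid_cycles / clock_hz
assert conversion_s == 0.8e-6              # 0.8 us
```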

The reported device is a VGA sensor with 4 um pixel pitch, fabricated in 110 nm 1P4M technology.  The noise values listed are 1.95 e with a gain of 8, and 2.85 e with a gain of 1.

Albert, 20-02-19

ISSCC 2019 (1)

Tuesday, February 19th, 2019

SmartSens presented a paper entitled : “A stacked global-shutter CMOS with SC-type hybrid-GS pixel and self-knee point calibration single-frame HDR and on-chip binarization algorithm for smart vision applications”.  This was a paper describing an image sensor in which several already-known ideas are combined.  The pixel of the imager is more or less the same as the one used by CMOSIS (now ams) : a global-shutter pixel with the storage node in the voltage domain.  Actually two storage nodes, to sample reset and signal values and so allow CDS.  Where the CMOSIS pixel has an in-pixel current source for the first follower, the SmartSens pixel instead has a row-select switch to allow the pixel to run in rolling mode without the extra in-pixel sampling needed for the global-shutter mode.  So you can run the pixel in rolling or in global-shutter mode (that refers to the word hybrid in the title of the paper).

HDR is obtained by biasing the TX gate at two levels during the exposure time.  In the first part of the exposure time TX gets an intermediate value which limits the full well of the PPD, and in the second part of the exposure time TX gets a low value to increase the full well of the PPD.  This too is a known technique, and it is also known that the creation of a knee point in the output characteristic will create serious fixed-pattern noise issues.  But in this paper, a calibration is done (on-chip) to cancel out the FPN.  And this is an interesting method : by an appropriate clocking of the reset drain, reset gate and transfer gate, the pinned photodiode is completely filled with charge to saturation, and next the pixel is read out to measure the saturation level.  All this is done on-chip.  The method of filling the PPD through the reset and transfer transistors is not new either (developed by TU Delft), but using this method for the on-chip calibration is new.
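
A toy model of the two-level TX exposure (my own illustrative numbers, not the paper's) shows where the knee point comes from ; the pixel-to-pixel variation of exactly this knee level is what the on-chip calibration has to cancel :

```python
def hdr_response(flux, t1=0.9, t2=0.1, knee=6000.0, full_well=10000.0):
    """Two-segment photoresponse from dual-level TX biasing.

    During the first fraction of the exposure (t1) the intermediate TX
    level limits the PPD to `knee` electrons -- excess charge is skimmed
    off.  During the second fraction (t2) the full well is available.
    All parameters are made-up illustrative values.
    """
    seg1 = min(flux * t1, knee)   # charge kept from the first segment
    seg2 = flux * t2              # charge from the second segment
    return min(seg1 + seg2, full_well)

assert hdr_response(1000.0) == 1000.0     # low light : fully linear
assert hdr_response(20000.0) == 8000.0    # above the knee : compressed
assert hdr_response(1e6) == 10000.0       # saturation at full well
```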

The readout chain is based on column parallel 13-bit counting ADCs with digital CDS.

The stacking technology (45 nm/65 nm, TSMC) has several interesting advantages, such as the use of MIM caps for the on-chip sample-and-hold.  In this way the caps can be made larger (= lower kTC noise) and the presence of the caps has no influence on the fill factor.  Another advantage is the quantum efficiency : reported is 95 % in green and 36 % at 940 nm.  Nothing is mentioned about MTF ; the high QE at 940 nm suggests that a thick epi layer is used, and that is not always beneficial for MTF.
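
The remark about larger caps follows from the sampled kTC noise voltage sqrt(kT/C) ; a quick check with hypothetical cap values (not the paper's) :

```python
from math import sqrt

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # temperature, K

def ktc_noise_volts(cap_farad):
    """rms voltage noise sampled on a capacitor : sqrt(kT/C)."""
    return sqrt(K_B * T / cap_farad)

# Hypothetical sizes : a small in-pixel cap vs a 10x larger MIM cap
# placed on the stacked readout die.
small_cap, large_cap = 5e-15, 50e-15
assert ktc_noise_volts(large_cap) < ktc_noise_volts(small_cap)

# A 10x larger cap lowers the sampled noise voltage by sqrt(10).
ratio = ktc_noise_volts(small_cap) / ktc_noise_volts(large_cap)
assert abs(ratio - sqrt(10)) < 1e-6
```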

Some more numbers : pixel size 2.7 um, 110 dB dynamic range (with HDR), PRNU is 0.6 %, full well is 10,000 electrons and the random noise is 3.5 electrons.  Shutter efficiency 20,000:1.

University of Michigan presented “Energy-efficient low-noise CMOS image sensor with capacitor array-assisted charge-injection SAR ADC for motion-triggered low-power IoT applications”.  Quite some time of the presentation was spent on the working principle of the ADC ; apparently the ADC concept was already presented at ISSCC 2016.  In a few words : the large capacitor array needed in a classical SAR is replaced by a current/charge injector which is controlled by a digital switch.
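
For reference, the successive-approximation search itself is unchanged by the charge-injection trick ; only the DAC behind the comparator is different.  A generic sketch of that search (my own, not the paper's circuit) :

```python
def sar_convert(vin, vref=1.0, bits=10):
    """Generic SAR binary search.  The comparison loop is the same
    whether the DAC is a binary-weighted capacitor array or, as in
    this paper, a digitally switched charge injector."""
    code = 0
    for b in range(bits - 1, -1, -1):
        trial = code | (1 << b)          # tentatively set bit b
        if vin >= trial * vref / 2**bits:
            code = trial                 # keep the bit if DAC <= input
    return code

assert sar_convert(0.5) == 512   # mid-scale input -> mid-scale code
assert sar_convert(0.0) == 0
```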

Motion detection in a sensor can be done in the pixel (requires a pixel modification, with reduced image quality), outside the array (requires extra memory) or in the column (called near-pixel).  The latter concept is used in this presentation.  Only minor extra hardware is added in the column to give the imager its motion-detection capability.

Pixel size is 1.5 um, 792 x 528 pixels, 65 nm 1P3M technology of TPSCo.  Interesting to see was the power breakdown : energy/frame/pix = 63.6 pJ (ADC + pixel).

Albert, 19-02-19