Electronic Imaging 2014 (1)

An interesting paper from Tohoku University was presented at EI14.  The group already published their 20M frames/s sensor a while ago at the ISSCC, but they never disclosed the pixel structure that allows the PPD to be emptied within the extremely short frame times.  The EI14 paper focused on the pixel architecture and specifically on the PPD structure.  Miyauchi explained that two technological “tricks” are applied to create an electric field in the PPD that speeds up the transfer of the photon-generated electrons from the PPD to the FD node.  Firstly, a gradient in the n-doping is implemented by using three different n-dopings; secondly, the n-regions are not simple rectangles or squares, but look like hedgehogs with all kinds of sharp needles extending away from the FD node.  On the one hand the lay-out of the triple n-implantation looks quite complicated, on the other hand it looks quite funny as well, but after all, it seems to be effective.
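
Just to get a feel for why such a built-in lateral field matters at this pixel size, here is a rough back-of-the-envelope estimate (the mobility is a textbook value, the field strength is purely an illustrative assumption, not a number from the paper):

```python
# Rough estimate of charge-transfer time in a large PPD: diffusion-only
# versus drift under a built-in lateral field.  Illustrative values only.

L = 32e-4            # transfer length in cm (~32 um, the pixel pitch)
mu = 1350.0          # electron mobility in silicon, cm^2/(V*s) (textbook value)
kT_q = 0.0259        # thermal voltage at room temperature, V
D = mu * kT_q        # diffusion constant via the Einstein relation, cm^2/s
E = 1000.0           # assumed built-in lateral field, V/cm (illustrative)

t_diffusion = L**2 / (2.0 * D)   # diffusion-limited transfer time, s
t_drift = L / (mu * E)           # drift-limited transfer time, s

print(f"diffusion-only transfer: {t_diffusion*1e9:.0f} ns")
print(f"drift-assisted transfer: {t_drift*1e9:.1f} ns")
```

Even with a modest assumed field, drift beats pure diffusion by roughly two orders of magnitude, which makes the single-digit-nanosecond transfer times plausible.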

Both simulation and measurement results were shown: the simulated worst-case transfer time was 9 ns, the measured transfer time about 5 ns.  These are very spectacular results, taking into account that the pixel size is 32 x 32 um2.  As far as the overall speed of the sensor is concerned: 10M frames/s is reported for a device with 100k pixels, 128 on-chip storage nodes for every pixel and a power consumption of 10 W.  The device can also run in a 50k-pixels mode, with the same power consumption, but then with a frame rate of 20M frames/s and an on-chip storage capacity of 256 frames.
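
A quick sanity check on these figures (simple arithmetic on the numbers quoted above):

```python
# Sanity check on the reported burst-mode figures: how long a burst lasts
# and what internal sample rate the sensor sustains in each mode.

modes = [
    # (pixels, frames/s, on-chip storage frames)
    (100_000, 10e6, 128),
    (50_000,  20e6, 256),
]

for pixels, fps, frames in modes:
    burst = frames / fps                 # burst duration in seconds
    rate = pixels * fps                  # pixel samples per second
    print(f"{pixels} px @ {fps/1e6:.0f} Mfps: "
          f"burst = {burst*1e6:.1f} us, "
          f"internal rate = {rate/1e12:.1f} Tsample/s")
```

Both modes boil down to the same 12.8 us burst length and the same internal throughput of 1 Tsample/s, which nicely matches the identical power consumption.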

 

There were two papers that used the same image sensor concept: allow the pixels to integrate up to a particular saturation level, and record the time it takes to reach that point.  This idea is not really new (was it Orly who did this for the first time with her conditional-reset idea?), but the way in which the concept is applied here seems to be new.

El-Desouki (King Abdulaziz City for Science and Technology, Saudi Arabia) uses SPADs and lets each SPAD count its events up to a certain defined number, thereby converting an amount of light into a time slot; this time slot is measured by converting it into the digital domain, and the resulting data is sent out.  A further sophistication of the idea is not to count in the digital domain (that needs too many transistors per pixel) but to do the counting in the analog domain.  Finally the author explained how one can build a TDI sensor based on this concept.
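
To illustrate the light-to-time principle (my own toy model, not the author's circuit), one can simulate a SPAD pixel that counts avalanche events up to a fixed threshold and reports the elapsed time; brighter light reaches the threshold sooner:

```python
import random

def light_to_time(photon_rate_hz, threshold=64, seed=0):
    """Simulate a SPAD pixel that counts avalanche events up to a fixed
    threshold and reports the elapsed time.  Intensity ~ threshold / time.
    Illustrative model only."""
    rng = random.Random(seed)
    t = 0.0
    for _ in range(threshold):
        # Exponentially distributed inter-arrival times of photon events.
        t += rng.expovariate(photon_rate_hz)
    return t

for rate in (1e4, 1e5, 1e6):             # three illumination levels
    t = light_to_time(rate)
    print(f"rate {rate:8.0f} events/s -> time to threshold {t*1e3:7.3f} ms, "
          f"estimated intensity {64/t:10.0f} events/s")
```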

A bit more “out-of-the-box” was the concept introduced by Dietz (University of Kentucky): allow the pixels to integrate up to a certain level (e.g. saturation), record the time it takes to reach that point, and perform this action continuously in the time domain.  In this way one gets, for each pixel, a kind of analog signal describing the behavior of that pixel in the time domain.  This way of operating the pixels makes the sensor completely free of any frame rate.  If an image is needed, one can take whatever timeslot out of the recorded time domain, take the analog signal out of the memory, and average the analog signal within this timeslot.  Of course every pixel needs a lot of processing as well as a huge storage space to record its behavior in the time domain, but with the stacked concept of imager-processor-memory the speaker was convinced that this should be feasible in the future.
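
To make the idea a bit more concrete, here is a toy model of such a frame-free, time-domain pixel (my own sketch with made-up parameters, and a simplified reconstruction that uses a single crossing interval instead of averaging over a timeslot):

```python
def record_crossings(flux_e_per_s, full_well=10_000, t_total=1.0):
    """Time-domain pixel: integrate to saturation (full well), log the
    crossing time, reset, repeat.  Returns the list of crossing times.
    Constant flux assumed for simplicity; illustrative only."""
    times, t = [], 0.0
    while t < t_total:
        t += full_well / flux_e_per_s   # time to fill the well at this flux
        times.append(t)
    return times

def intensity_at(times, t_query, full_well=10_000):
    """Reconstruct intensity at an arbitrary moment: take the crossing
    interval that brackets t_query and convert it back to a flux."""
    prev = 0.0
    for t in times:
        if t >= t_query:
            return full_well / (t - prev)   # electrons per second
        prev = t
    return None

crossings = record_crossings(flux_e_per_s=2e5)
print(f"flux estimate at t=0.5 s: {intensity_at(crossings, 0.5):.0f} e-/s")
```

Note that there is no frame clock anywhere in this model: the query time t_query can be chosen freely after the fact, which is exactly the point of the concept.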

Yonai (NHK, Japan) presented some new results obtained with the existing 33M-pixel UHDTV sensor, presented earlier in the Walter Kosonocky Award winning paper.  This time the authors changed the timing diagram such that digital CDS can be performed off-chip.  The results: a 50 times reduction in FPN (down to 1 electron) and a 2 times reduction in thermal noise (down to 3 electrons @ 60 fr/s).
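
For readers less familiar with the term: digital CDS means digitizing both the reset level and the signal level of each pixel and subtracting the two in the digital domain, which cancels the fixed offsets that show up as FPN.  A minimal sketch (illustrative numbers, not NHK's pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (4, 6)                       # toy pixel array

offsets = rng.normal(0, 50, shape)   # per-pixel fixed offsets -> FPN (DN)
signal = np.full(shape, 1000.0)      # true, uniform exposure signal (DN)

reset_frame = offsets + rng.normal(0, 3, shape)            # digitized reset level
signal_frame = signal + offsets + rng.normal(0, 3, shape)  # digitized signal level

cds = signal_frame - reset_frame     # digital CDS: subtract the reset sample

print(f"pixel-to-pixel spread before CDS: {signal_frame.std():6.1f} DN")
print(f"pixel-to-pixel spread after  CDS: {cds.std():6.1f} DN")
```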

Kang (Samsung) presented a further sophistication of the RGB-Z sensor that was already presented by Kim at the ISSCC.  From one single imager a normal RGB image can be generated, as well as a depth map by using the imager in ToF mode.  The author presented a simple but intelligent technique to improve the performance of the device by removing any asymmetry in the pixel design/lay-out/fabrication: simply reversing the Q0 and Q180 taps from frame to frame.  Actually the technique looks very much like chopping in analog circuitry.
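
The chopping analogy can be made explicit with a tiny model (my own sketch with hypothetical mismatch numbers): if the two taps have a gain mismatch, swapping the roles of Q0 and Q180 on alternate frames and averaging the two results cancels the error to first order:

```python
def ratio(q0, q180):
    """Simple 2-tap ToF estimator: normalized difference of the two
    charge bins; monotonic in the photon delay (toy model)."""
    return (q0 - q180) / (q0 + q180)

# True charges for some object distance (arbitrary units).
Q0_true, Q180_true = 700.0, 300.0

# Hypothetical tap mismatch: tap A reads 5% high, tap B 5% low.
gA, gB = 1.05, 0.95

# Frame 1: tap A integrates Q0, tap B integrates Q180.
r1 = ratio(gA * Q0_true, gB * Q180_true)
# Frame 2: roles reversed (the "chopping" step).
r2 = ratio(gB * Q0_true, gA * Q180_true)

print(f"ideal ratio     : {ratio(Q0_true, Q180_true):+.4f}")
print(f"frame 1 only    : {r1:+.4f}")
print(f"chopped average : {(r1 + r2) / 2:+.4f}")
```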

 

Albert

February 7th, 2014.
