Yoshioka (Toshiba) presented a 20-channel TDC/ADC hybrid SoC for 200 m ranging LiDAR. The goal of this work was to develop a LiDAR system that can work for short distances (urban driving situations) as well as for larger distances, up to 200 m (highway situations, but not for Germany, there you need even larger distances !). The complete system relies on a “Smart Accumulation Technique” (SAT), which was explained during the presentation, but which is too complicated to summarize here in a few words. The advantage of this SAT method is that it allows ranging at longer distances while maintaining image quality. Special attention was given to the choice of the LiDAR quantizer circuitry. A TDC is fast and consumes a small area, but acquires only the ToF information, while an ADC is compatible with the proposed SAT method, but is slow and consumes a large area. But if it is difficult to make a choice, then implement both ! And that is exactly what is done in this hybrid concept : the TDC is used for short distances (0 … 20 m), the ADC is used for larger distances (20 … 200 m).
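To make the hybrid idea a bit more concrete, here is a minimal sketch (my own illustration, certainly not the Toshiba implementation ; only the 0 … 20 m / 20 … 200 m split is taken from the talk) of what a range-dependent quantizer selection could look like :

```python
# Hypothetical illustration of the hybrid TDC/ADC quantizer choice
# (my own simplification; only the 0-20 m / 20-200 m split is from the talk).

SPEED_OF_LIGHT = 3.0e8  # m/s

def distance_from_tof(tof_s):
    """Convert a round-trip time-of-flight (in seconds) to a distance (in meters)."""
    return 0.5 * SPEED_OF_LIGHT * tof_s

def select_quantizer(coarse_distance_m):
    """Pick the quantizer path depending on the (coarse) target distance."""
    if coarse_distance_m <= 20.0:
        return "TDC"   # fast and small, but delivers only the ToF information
    if coarse_distance_m <= 200.0:
        return "ADC"   # slower and larger, but compatible with the SAT accumulation
    return None        # beyond the specified 200 m range

# Example : a return pulse arriving 1 us after emission corresponds to 150 m,
# so the ADC path would be used.
print(select_quantizer(distance_from_tof(1.0e-6)))  # -> ADC
```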
Bamji (Microsoft) presented a kind of follow-on of the work presented a few years ago at ISSCC and very well described in JSSC (they got the Walter Kosonocky Award for this publication). A 1 Mpixel ToF image sensor showed impressive results : 3.5 um pixels, global shutter (GS), 320 MHz demodulation with a modulation contrast of 78 %, QE of 44 % at 860 nm, and a per-pixel AGC to obtain HDR. The device is realized in a 65 nm BSI (1P8M) process. Unfortunately it was not explained (not even in the Q&A) how the GS with an efficiency of 99.8 % is realized.
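One small back-of-the-envelope remark of my own (using the standard CW ToF relation ; only the 320 MHz figure comes from the paper) : at such a high modulation frequency the single-frequency unambiguous range is only about half a meter, which is why these sensors typically combine several modulation frequencies for phase unwrapping :

```python
# Back-of-the-envelope check using the standard CW ToF relation
# (only the 320 MHz modulation frequency is taken from the paper).
C = 3.0e8        # speed of light in m/s
F_MOD = 320e6    # demodulation frequency in Hz

unambiguous_range = C / (2 * F_MOD)  # distance at which the measured phase wraps
print(f"Unambiguous range at 320 MHz : {unambiguous_range:.3f} m")  # ~0.469 m
```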
Ximenes (TU Delft) presented a 256×256 SPAD-based ToF sensor. The device is made in 45 nm (then you should be able to guess who fabricated the chip) and is stacked on a 65 nm logic chip. It was claimed that, for the very first time, the supporting digital part of the detector was fully synthesized. Pixel pitch is 19.8 um with a fill factor of 31.3 %. The overall system allows a distance range of 150 … 430 m with a precision in the order of 0.1 % and an accuracy of around 0.3 %.
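Translating those relative numbers into absolute distances (just simple arithmetic on the figures quoted above, nothing more) :

```python
# Converting the quoted relative precision/accuracy into absolute values.
for distance_m in (150.0, 430.0):
    precision_m = 0.001 * distance_m  # 0.1 % precision
    accuracy_m = 0.003 * distance_m   # ~0.3 % accuracy
    print(f"{distance_m:5.0f} m -> precision ~{precision_m:.2f} m, "
          f"accuracy ~{accuracy_m:.2f} m")
```

So even at 430 m the quoted precision corresponds to less than half a meter.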
Gasparini (FBK) closed the session with a talk about a 32×32 pixel array for quantum physics applications. The presenter tried to explain the concept of entangled photons, but I am not sure whether all people in the audience understood this concept after a full day of presentations. The sensor is based on SPADs, each with its own quenching circuit and TDC. The chip was realized in a low-cost 150 nm 1P6M standard CMOS technology, pixel size 44.64 um (with in-pixel time-stamping, not surprising if David Stoppa is one of the co-authors) and almost 20 % fill factor. Features are implemented to skip rows and frames to speed up the overall imaging system.
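To give an idea of what such skipping can look like in practice, here is a hypothetical sketch (my own simplification ; the actual FBK control logic was not detailed in the talk) :

```python
# Hypothetical illustration of row- and frame-skipping during readout
# (my own simplification, not the actual FBK control logic).

def read_row(frame, row):
    """Placeholder for the actual readout of the per-pixel TDC time-stamps."""
    pass

def read_frames(n_frames, n_rows=32, frame_skip=2, active_rows=None):
    """Read every `frame_skip`-th frame, and only the rows in `active_rows`."""
    if active_rows is None:
        active_rows = range(n_rows)   # default : read the full 32-row array
    for frame in range(n_frames):
        if frame % frame_skip != 0:
            continue                  # skip this frame entirely
        for row in active_rows:
            read_row(frame, row)

# Example : read only the central 8 rows, and only every second frame.
read_frames(n_frames=10, frame_skip=2, active_rows=range(12, 20))
```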
Overall, the imaging session of ISSCC 2018 was one of high quality : very well prepared presentations and great slides (16:9 format for the first time at ISSCC). There was a large audience (the largest ever for the image sensor session at this conference), and during the Q&A long lines were forming in front of the microphone.
Take-away message : everything gets faster, supply voltages go lower, power consumption goes down, stacking is becoming the key technology, and apparently the end of the developments in our field is not yet near ! The future looks bright for the imaging engineers !!!
Albert, 15-02-2018.