Today, Wednesday the 21st, 2013, the image sensors were presented at the ISSCC in San Francisco. In this blog (and more to come) I would like to give a short review of the presented material. As usual I try to do this without figures or drawings, so as not to violate any copyrights of the authors and/or of ISSCC.
The image sensor session kicked off with two papers from the University of Michigan. The first one, delivered by J. Choi, was entitled "A 3.4 uW CMOS image sensor with embedded feature-extraction algorithm for motion-triggered object-of-interest imaging". The basic idea is to develop an imager that can be used in a large sensor network and is characterized by minimal power consumption. For this purpose, a motion-triggered sensor was developed. That in itself is not really new, but in this paper, once the sensor is triggered it moves into an object-of-interest mode instead of a region-of-interest mode. So the sensor recognizes persons and tries to track them. All circuitry needed for this is included in the pixel and/or on the chip.
In standard (sleeping) mode the sensor delivers a 1-bit motion-sensing frame; once a moving object is recognized, the sensor wakes up and switches into an 8-bit object-detection and object-tracking mode. From a technical point of view, the sensor has a pretty clever pixel design, with an in-pixel memory capacitor for frame storage (used to detect motion). Most inventive, however, is the combination of the circuitry of two pixels to build a low-power output stage, operated at 1.8 V. The pixel circuitry is thus reconfigurable depending on the mode of operation; this reconfigurability allows the low supply voltage and results in the low power consumption.
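To make the sleep/wake behaviour a bit more concrete, here is a minimal behavioural sketch of such a motion-triggered mode switch. This is my own illustration, not the authors' implementation: the stored frame stands in for the in-pixel memory capacitor, and the threshold values are purely assumed.

```python
import numpy as np

# Behavioural sketch (not the paper's circuit): the sensor sleeps in a 1-bit
# motion-sensing mode and wakes up into an 8-bit imaging mode once enough
# pixels report a change versus the stored frame. Thresholds are illustrative.

MOTION_PIXEL_THRESHOLD = 25   # assumed: number of changed pixels that triggers wake-up

def motion_frame(current, stored, delta=8):
    """1-bit frame: '1' where the pixel differs from the stored frame."""
    return (np.abs(current.astype(int) - stored.astype(int)) > delta).astype(np.uint8)

def next_mode(current, stored, mode):
    """Decide whether the sensor stays asleep or switches to 8-bit imaging."""
    if mode == "sleep" and motion_frame(current, stored).sum() > MOTION_PIXEL_THRESHOLD:
        return "object_detection"          # wake up: full 8-bit read-out
    return mode

# Example: a static scene keeps the sensor asleep, a moving blob wakes it up.
stored = np.zeros((64, 64), dtype=np.uint8)
moving = stored.copy()
moving[10:20, 10:20] = 200                 # simulated object entering the scene
print(next_mode(stored, stored, "sleep"))  # -> sleep
print(next_mode(moving, stored, "sleep"))  # -> object_detection
```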
The recognition of objects (persons) is based on a “gradient-to-angle” converter, which is implemented on-chip. By making smart use of simple switched-capacitor circuitry, complicated trigonometric calculations can be avoided.
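The general trick behind avoiding trigonometry can be illustrated as follows: instead of computing atan2(Gy, Gx) explicitly, the gradient angle can be quantized into a few bins by comparing Gy against Gx scaled by fixed ratios, and such fixed ratios map naturally onto capacitor ratios in a switched-capacitor implementation. The sketch below shows this comparison-only binning; the bin boundaries are my own assumptions, not the values used in the paper.

```python
import math

# Comparison-only angle binning: quantize the gradient angle without atan2 by
# comparing gy against gx times a few fixed ratios (in hardware: capacitor ratios).
# Bin edges below are assumed for illustration, not taken from the paper.

BIN_EDGES_DEG = [22.5, 45.0, 67.5]          # boundaries between four angle bins in 0..90 deg
BIN_RATIOS = [math.tan(math.radians(a)) for a in BIN_EDGES_DEG]

def angle_bin(gx, gy):
    """Quantize |angle| of the gradient (gx, gy) into one of four bins using only comparisons."""
    gx, gy = abs(gx), abs(gy)
    for i, r in enumerate(BIN_RATIOS):
        if gy <= r * gx:                    # gy/gx <= tan(edge)  ->  angle below this edge
            return i
    return len(BIN_RATIOS)                  # steeper than the last edge

# Quick check against the exact trigonometric result
for gx, gy in [(10, 1), (10, 5), (10, 10), (1, 10)]:
    exact = math.degrees(math.atan2(gy, gx))
    print(f"gx={gx:3d} gy={gy:3d}  exact={exact:5.1f} deg  bin={angle_bin(gx, gy)}")
```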
The second paper from the same university was delivered by G. Kim: "A 467 nW CMOS visual motion sensor with temporal averaging and pixel aggregation". Basically the same application: an ultra-low-power sensor with motion detection to wake up the sensor. The device makes use of 4 different pixel designs/functionalities within every 8 x 8 kernel of pixels. These different types of pixels allow the sensor to extend its range of motion detection, from slow-moving to fast-moving objects. The "temporal averaging" in the title refers to one of the pixel types with a long exposure time; the "pixel aggregation" refers to the aggregation/summation of signals coming from 16 pixels out of the group of 8 x 8 pixels.
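To show roughly what these two title concepts mean at a behavioural level, here is a simplified sketch under my own assumptions (which 16 pixels are summed and how the averaging is organized are not taken from the paper): aggregation sums a subset of pixels per 8 x 8 kernel, and temporal averaging of that signal acts like a long effective exposure that makes slow motion visible.

```python
import numpy as np

# Simplified illustration (assumptions, not the paper's architecture):
# "pixel aggregation" sums 16 pixels per 8x8 kernel (here: a regular sub-grid),
# "temporal averaging" averages the aggregated signal over several frames.

def aggregate(frame, step=2):
    """Sum 16 pixels (every other row/column) in each 8x8 kernel of the frame."""
    h, w = frame.shape
    out = np.zeros((h // 8, w // 8))
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            out[i // 8, j // 8] = frame[i:i + 8:step, j:j + 8:step].sum()
    return out

def temporal_average(frames):
    """Average a stack of aggregated frames (long effective exposure)."""
    return np.mean([aggregate(f) for f in frames], axis=0)

# Example: a slow brightness drift that is buried in noise frame-to-frame
# shows up clearly when two temporally averaged blocks of frames are compared.
rng = np.random.default_rng(0)
frames = [rng.normal(100, 1, (32, 32)) for _ in range(8)]
frames_later = [f + 0.5 for f in frames]   # simulated slow change of the scene
print(np.abs(temporal_average(frames_later) - temporal_average(frames)).mean())
```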
Worth noticing: the device is fabricated in a standard logic 0.13 um CMOS process, 1P8M, so no pinned photodiode (PPD)! During the presentation, the author gave a lot of details about the design as well as about the working principle of the various pixels.
Albert, 22-01-2013.