ISSCC Report (4)

Continuation of the previous blog.

 

       S. Itoh et al. (Shizuoka University and Toyota) : “CMOS imager for LED based optical communication”.  This paper described an application I had never heard of before : traffic lights, road signs and rear lights of cars can be made of LEDs that send out a coded signal while operating normally.  These codes can be captured by a camera, and the camera output can be used to drive/steer/manage a car on the road.  One prerequisite is that all the traffic lights, road signs and rear lights need to be made out of LEDs !

The authors described a CMOS image sensor with a double set of pixels organized in interleaved columns : for instance, all odd columns contain pixels used for normal imaging, while all even columns contain pixels intended for capturing the coded signals transmitted by the LEDs of the traffic lights, road signs and other cars on the road.  The pixels in the even columns are also based on a pinned photodiode, in combination with a lateral charge overflow drain.  Very interesting application in traffic control !
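To make the column-interleaved readout a bit more concrete, here is a minimal sketch (my own illustration, not taken from the paper) of how a captured frame could be split into its two column sets, assuming for simplicity that the even-indexed columns carry the communication pixels and the odd-indexed columns carry the imaging pixels :

```python
import numpy as np

def split_interleaved_frame(frame):
    """Split a raw frame from the dual-pixel array into its two column sets.

    Assumption (illustration only): columns 0, 2, 4, ... hold the
    communication pixels that sample the LED-coded signals, and
    columns 1, 3, 5, ... hold the pixels used for normal imaging.
    """
    comm_cols = frame[:, 0::2]    # pixels capturing the LED codes
    image_cols = frame[:, 1::2]   # pixels used for normal imaging
    return image_cols, comm_cols

# Example with a dummy 8 x 8 frame
frame = np.arange(64).reshape(8, 8)
image_half, comm_half = split_interleaved_frame(frame)
print(image_half.shape, comm_half.shape)   # (8, 4) (8, 4)
```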

 

       S. Mandai et al. (University of Tokyo) : “3D range-finding image sensor”.  A 3D method based on a projected sheet beam is presented.  When using a 2D imaging array, it takes quite a bit of time to analyze the image and to “find” the reflected beam in it.  In this paper a clever row-parallel search and address-encoding architecture is presented.  It makes use of a row-parallel binary search tree and address encoding for high-speed detection and read-out of the activated pixel addresses.  In this way the speed of depth detection in the image can be increased by an order of magnitude.
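A software analogue of this row-parallel binary-search idea (a rough sketch under my own assumptions, not the authors' circuit) : each row can be thought of as providing a wired-OR flag that tells whether any pixel in a given column range has been hit by the reflected beam, and the column address of that pixel is then encoded bit by bit in log2(N) steps instead of scanning all N columns :

```python
def encode_active_column(row_active, n_cols):
    """Binary-search-style address encoding for one row (illustrative).

    row_active(lo, hi) models a wired-OR flag: True if any pixel in
    columns [lo, hi) of this row was activated by the reflected sheet beam.
    Returns the column address of the (assumed single) active pixel,
    or None if the row saw no reflection.  Only log2(n_cols) tests are
    needed instead of a scan over all n_cols columns.
    """
    lo, hi = 0, n_cols
    if not row_active(lo, hi):
        return None
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if row_active(lo, mid):   # active pixel sits in the lower half
            hi = mid
        else:                     # otherwise it must be in the upper half
            lo = mid
    return lo

# Example: a dummy row of 128 pixels in which column 37 is lit by the beam
pixels = [False] * 128
pixels[37] = True
print(encode_active_column(lambda lo, hi: any(pixels[lo:hi]), 128))   # 37
```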

 

       D. Stoppa et al. (Fondazione Bruno Kessler) : “Range image sensor based on lock-in pixels”.  The range-finding architecture based on lock-in pixels is not new, but the presented pixel is.  The authors showed a (prior art) lock-in pixel based on a pinned photodiode, but also illustrated the drawback of this pixel : limited speed, because the electric drift field in the pixel is too low.  By increasing the drift field in the pixel the speed can be increased, and this is done by adding two poly-Si gates on top of the pinned photodiode.  These two poly-Si gates are toggled during operation.  A simple idea, but it has some very interesting benefits : the authors show that the speed of the pixels can be increased (by almost two orders of magnitude) as well as the demodulation contrast (by more than a factor of 2).  It should be remarked that, although the authors call this a pinned-photodiode pixel, the pixels are actually not pinned.  Gates on top of a real pinned photodiode would not create any drift field in the pixels, but in the paper the top layer of the pinned photodiode is no longer a p+, but a p-.  What is actually realized is not a pinned photodiode, but a buried p-type CCD channel.  Basically, it is not important what the thing is called ; the performance is what really matters.
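For readers not familiar with lock-in pixels, the sketch below shows the standard 4-tap demodulation usually applied to such pixels (the generic textbook scheme, not necessarily the exact processing used by the authors) : four samples taken at 0, 90, 180 and 270 degrees of the modulation period give the phase (hence the distance), the amplitude and the offset, and the demodulation contrast mentioned above is the ratio of amplitude to offset :

```python
import math

C = 299_792_458.0   # speed of light in m/s

def four_phase_demodulation(a0, a1, a2, a3, f_mod):
    """Generic 4-tap lock-in (time-of-flight) demodulation, illustration only.

    a0..a3 : pixel samples at 0, 90, 180 and 270 degrees of the modulation.
    f_mod  : modulation frequency in Hz.
    Returns (distance in m, demodulation contrast = amplitude / offset).
    """
    phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
    amplitude = 0.5 * math.sqrt((a3 - a1) ** 2 + (a0 - a2) ** 2)
    offset = (a0 + a1 + a2 + a3) / 4.0
    distance = C * phase / (4 * math.pi * f_mod)
    contrast = amplitude / offset if offset > 0 else 0.0
    return distance, contrast

# Example with made-up samples and a 20 MHz modulation frequency
d, c = four_phase_demodulation(1.0, 0.2, 1.0, 1.8, 20e6)
print(round(d, 3), round(c, 2))   # 1.874 0.8
```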

 

       T. Azuma et al. (Panasonic) : “4K x 2K image sensor based on dual resolution and exposure technique”.  The presenter started with an interesting statement : the sensitivity of solid-state image sensors has increased by a factor of 40,000 compared to the device originally invented by Boyle and Smith.  It is up to the reader to judge whether this is correct or not !

In the paper a sensor is described that tries to improve the sensitivity by a further factor of 4.  This is done by an addressing architecture that allows the green pixels to be read and processed independently of the blue and red ones.  In this way the green pixels get an exposure time 4 times as long as normal.  This is the sensitivity gain, but of course pretty nasty motion artifacts will be introduced.  The blue and the red pixels get the normal exposure time, but in the red and blue channels the gain is created by pixel binning in 4 x 4 mode (which actually gives even a factor of 16 in sensitivity !).  Clever signal processing should use the information of the blue and red channels to correct the motion artifacts of the green channel.
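Just to put the numbers on these gains in one place (my own back-of-the-envelope sketch, not the authors' signal chain) : the green channel gains by the longer exposure, while the red and blue channels gain by summing the charge of the binned pixels, for instance like this :

```python
import numpy as np

def bin_pixels(channel, factor=4):
    """Sum factor x factor blocks of a colour channel (illustrative binning).

    Summing 4 x 4 blocks adds the charge of 16 pixels, which is where the
    factor of 16 in sensitivity mentioned above comes from, at the cost of
    spatial resolution; the green channel instead gets its factor of 4 from
    the 4x longer exposure time.
    """
    h, w = channel.shape
    h, w = h - h % factor, w - w % factor    # crop to a multiple of factor
    blocks = channel[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.sum(axis=(1, 3))

red = np.ones((8, 8))        # dummy red channel under uniform illumination
print(bin_pixels(red))       # every binned output pixel holds 16x the signal
```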

 

       H. Wakabayashi et al. (Sony) : “10.3 Mpix back-side illuminated imager”.  The authors presented the back-side illuminated device that was already introduced by Suzuki during his plenary talk.  A few extra details were shown : the device can run at 50 fr/s at the full resolution of 10.3 Mpix with 10 bits resolution.  In the case of 12 bits, the speed slows down to 22 fr/s, and in the case of 60 fr/s, the resolution goes down to 6.3 Mpix.  Operation at 120 fr/s is also possible, but then with only 3.2 Mpix.  Apparently this sensor should be able to combine DSC and video in one device.  Pretty interesting numbers were given for the angle dependency : for a ray angle of 10 deg., the sensitivity drop is only 9 %, and moreover the difference between the three colour channels is less than 1 %.  At a ray angle of 20 deg., the remaining sensitivity is slightly less than 60 %, and the difference between the colour channels is 3 % (based on the figure shown in the proceedings).  The presenter mentioned that the cross-talk of the back-side illuminated sensor is minimized by means of an extra metal shield at the back of the device, by optimized doping of the photodiode, by optimized doping of the substrate, and by optimized wiring of the metal layers on the front side.

Some numbers : 0.14 um process, 1P4M, LVDS output, lag below the measurement limit, power consumption 375 mW for the HD video mode.  Most of the other performance data is mentioned in one of the earlier blogs.
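For what it is worth, a quick back-of-the-envelope check (my own calculation, not from the paper; the bit depths of the last two modes are assumed) shows that the quoted readout modes all correspond to raw pixel rates of the same order :

```python
# Raw pixel-rate check of the readout modes quoted above (illustrative only).
modes = [
    ("10.3 Mpix @  50 fr/s, 10 bit", 10.3e6,  50, 10),
    ("10.3 Mpix @  22 fr/s, 12 bit", 10.3e6,  22, 12),
    (" 6.3 Mpix @  60 fr/s, 10 bit",  6.3e6,  60, 10),   # bit depth assumed
    (" 3.2 Mpix @ 120 fr/s, 10 bit",  3.2e6, 120, 10),   # bit depth assumed
]

for name, pixels, fps, bits in modes:
    pixel_rate = pixels * fps                 # pixels per second
    data_rate = pixel_rate * bits / 1e9       # Gbit/s before any overhead
    print(f"{name}: {pixel_rate / 1e6:5.0f} Mpix/s, {data_rate:4.2f} Gbit/s")
```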

 

Conclusion : a very good image sensor session, very interesting papers, all presenters had very well-organized talks and high-quality slides.  Excellent work by Dan McGrath (and his crew), who chairs the image-sensor subcommittee of ISSCC.

 

Albert 2010-02-11
