Archive for February, 2010

ISSCC Report (6)

Wednesday, February 17th, 2010

Continuation of previous blog.

 

       Makoto Ikeda (University of Tokyo) : “3D range image capture technologies”.  A detailed  overview of 3D capturing techniques was given :

o   triangulation (stereo matching, light section method),

o   time-of-flight (time measurements and correlation techniques) and,

o   interference method.

All methods discussed were illustrated with their theory, devices, circuits, and measurements.  Of course there does not exist a single best solution for all situations.  All methods have their own benefits and limitations.  According to Ikeda-san, stereo matching is limited by its computational speed; the light section method is limited by the integration time, the mechanical speed and its robustness; direct TOF is limited by the readout speed, the dark current and robustness; while the correlation method is limited by its integration time.
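As a quick reminder of how differently these techniques obtain depth, here is a minimal sketch of the two textbook depth relations behind triangulation and direct time-of-flight (my own illustration, not material from the talk; all names and numbers are assumptions) :

```python
# Minimal sketch of the two basic depth relations (illustration only).

C = 299_792_458.0  # speed of light [m/s]

def depth_from_stereo(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulation: z = f * B / d; small disparities give large, noisy depths."""
    return focal_px * baseline_m / disparity_px

def depth_from_tof(round_trip_s: float) -> float:
    """Direct time-of-flight: z = c * t / 2 (the light travels out and back)."""
    return C * round_trip_s / 2.0

print(depth_from_stereo(focal_px=1200.0, baseline_m=0.1, disparity_px=24.0))  # 5.0 m
print(depth_from_tof(33.3e-9))                                                # ~5.0 m
```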

       Levy Gerzberg (Zoran) : “High Speed Digital Image Processing”.  Levy showed in his talk a few interesting examples of digital image processing, such as blur correction (due to camera shake), lens distortion correction, dynamic range correction and colour management.  Unfortunately he did not explain the technical algorithms behind these corrections. 

He also made clear that in the future the processing will become much more complex than ever before.  For instance, the algorithms applied in cameras will rely on the content of the images and can change from picture to picture.  He mentioned just one example : red-eye correction will be different for children and adults.  So the processing needs to find out whether the subject in a picture is a child or an adult before the red-eye correction can be applied.

       Masatoshi Ishikawa (University of Tokyo) : “Vision Chips and Its Applications to Human Interface, Inspection, Bio/Medical Industry and Robotics”.  Ishikawa-san explained the need for “medium” speed imaging and explained a super vision chip containing in-pixel processing, and then the real show started !  Ishikawa showed fabulous demos, such as gesture recognition (with one of his head-banging students), multi-target tracking, 3D shape recognition (illustrated with a book-flipping scanner), a bio/medical demonstration with the inspection of moving sperm, and robotics (a raw-egg catcher, a robot catching a mobile phone in free space, two robots playing baseball, and dynamic dribbling).

Together with the great examples of high-speed vision Ishikawa gave very funny details about the preparation of the demos.  He really “entertained” the audience.

       Ronald Kapusta and Katsu Nakamura (Analog Devices) : “High-Speed Analog Interfaces for Image Sensors”.  Nakamura explained how Analog Devices tries to fit their analog interface circuits to a wide variety of input signals coming from different sensors of different vendors, each with its own specification.  He explained the need for input reconfigurability for inter-operability, and the design challenges of the analog interfaces to achieve low crosstalk and matching at or below the 14-bit level.  When he came to the explanation of a kTC-noise reduction circuit, an interesting discussion started about the presence and reduction of kTC noise in the analog circuitry Nakamura showed.
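For readers less familiar with kTC noise : sampling a voltage on a capacitor leaves an rms noise of sqrt(kT/C), or sqrt(kTC)/q electrons of equivalent charge noise.  A back-of-envelope sketch of these textbook expressions (my own illustration; the capacitor value is an assumption, not a number from the talk) :

```python
import math

K_B = 1.380649e-23     # Boltzmann constant [J/K]
Q_E = 1.602176634e-19  # elementary charge [C]

def ktc_noise(cap_farad: float, temp_kelvin: float = 300.0):
    """RMS kTC (reset) noise of a sampling capacitor.

    Voltage noise: sqrt(kT/C); charge noise in electrons: sqrt(kTC)/q.
    """
    v_rms = math.sqrt(K_B * temp_kelvin / cap_farad)
    e_rms = math.sqrt(K_B * temp_kelvin * cap_farad) / Q_E
    return v_rms, e_rms

v, e = ktc_noise(1e-12)  # an assumed 1 pF sample-and-hold capacitor
print(f"{v*1e6:.1f} uV rms, {e:.0f} electrons rms")  # ~64 uV, ~402 e-
```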

       Jean-Manuel Dassonville (Agilent) : “Test Challenges of High Speed Imaging Serial Busses in Mobile Devices”.  The agenda of this lecture was based on :

o   requirements and technology trends for imaging interconnects, such as higher throughput, less PCB space, fewer skew and clock issues, mechanisms for reliable data transfer, and reduced power consumption,

o   main attributes of high speed serial interconnects,

o   overview of some imaging interconnects (MIPI, HDMI, MDDI, DisplayPort),

o   typical test requirements and challenges.

 

The forum ended with a short panel discussion.  A strong program, excellent speakers and good documentation (with a lot of references included in the printed material) made this exhausting day a great forum !

 

Albert 2010-02-17

ISSCC Report (5)

Sunday, February 14th, 2010

At the International Solid-State Circuits Conference a one-day forum entitled “High-Speed Imaging Technologies” was organized and chaired by Johannes Solhusvik (Aptina).  An appealing program was put together, and here I would like to list a few highlights.  After the opening by Johannes, the following topics were discussed :

       Boyd Fowler (Fairchild Imaging) : “High Speed CMOS Pixel Physics and Electronics”. Boyd’s talk showed that high-speed imaging is more than just a fast readout !  From his presentation the following lessons could be learned :

o   QE and MTF are inversely related : a larger QE requires the collection of deeply generated electrons, but unfortunately these deeper generated electrons will freely diffuse through the non-depleted substrate and cause contrast losses,

o   carrier transport needs to be dominated by drift for high-speed detection; if the carrier transport is dominated by diffusion, it takes too long before the carriers are collected in the diodes,

o   lag must be eliminated during carrier transport,

o   digital readout (in-pixel ADC) is about an order of magnitude faster than analog,

o   high-speed sensors need to be fully depleted on a thin substrate; this is basically a combination of the aforementioned arguments.

Some of these statements were illustrated by means of simulations.
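To put rough numbers on the drift-versus-diffusion argument, here is a back-of-envelope sketch using textbook silicon values (my own illustration, not Boyd's simulations; the collection depth and the voltage drop are assumptions) :

```python
# Back-of-envelope comparison of carrier transit times for drift vs. diffusion
# over the same distance (illustrative numbers only).

MU_N = 0.135       # electron mobility in Si [m^2/Vs] (low-doping value)
KT_Q = 0.0259      # thermal voltage at 300 K [V]
D_N = MU_N * KT_Q  # diffusion constant via the Einstein relation [m^2/s]

depth = 10e-6        # assumed collection depth [m]
field = 1.0 / depth  # assumed 1 V dropped across that depth -> E field [V/m]

t_drift = depth / (MU_N * field)  # t = d / (mu * E)
t_diff = depth**2 / (2.0 * D_N)   # t = d^2 / (2 * D)

print(f"drift: {t_drift*1e9:.2f} ns, diffusion: {t_diff*1e9:.1f} ns")
# ~0.74 ns vs. ~14 ns: drift wins by over an order of magnitude
```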

       Jan Bosiers (DALSA) : “High-Speed Imaging with CCDs”.  His talk contained details about :

o   high-speed operation of CCDs in general,

o   high-speed readout architectures,

o   high speed capture concepts and

o   details about a high speed CCD imaging sub-system.

Jan’s talk contained quite a few details about the ISIS sensor of prof. Etoh (fabricated by DALSA).  This device (with BSI !) is capable of capturing 1 M frames per second.  Great animations (with an NHK camera) showed the capabilities of this camera and sensor.

       Guy Meynants (CMOSIS) : “High speed CMOS image sensor architectures”.  In his talk Guy addressed the following topics :

o   High speed CIS architectures with an analog output, focusing on column load schemes, track-and-hold circuit, column amplifier, multiplexer design, ROI readout, column bus construction, analog buffers and power multiplexing,

o   High speed CIS architectures with ADC on-chip, with column ADC, global shuttering.

Guy’s conclusion was actually straightforward : parallelism is heavily applied to speed up the devices.  He illustrated this with many examples and with a nice demo at the end of his talk.

       Shoji Kawahito (Shizuoka University and Brookman Technology) : “Column readout circuit design for high-speed low-noise imaging”.  The following items were addressed :

o   Analog versus digital column readout,

o   Column readout with preamplifier,

o   Column-level ADC with accelerated readout timing,

o   Source follower noise analysis,

o   Column ADC with preamplifier and CDS,

o   Column parallel ADC architectures,

o   Digital CDS,

o   Figures of merit of column ADCs and imagers.

 

Will be continued,

 

Albert 14-02-2010.

ISSCC Report (4)

Thursday, February 11th, 2010

Continuation of previous blog.

 

       S. Itoh et al. (Shizuoka University and Toyota) : “CMOS imager for LED based optical communication”.  This paper described an application I had never heard of before : traffic lights, road signs and backlights of cars can be made of LEDs that send out a coded signal while operating normally.  These codes can be captured by a camera, and the output of the camera can be used to drive/steer/manage a car driving on the road.  One prerequisite is that all the traffic lights, road signs and backlights need to be made out of LEDs !

The authors described a CMOS image sensor with a double set of pixels organized in interleaved columns : all odd columns contain pixels that are used for normal imaging, while all even columns contain pixels intended for capturing the coded signals transmitted by the LEDs of the traffic lights, road signs and other cars on the road.  The pixels in the even columns are also based on a pinned photodiode, in combination with a lateral charge overflow drain.  A very interesting application in traffic control !
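To make the idea concrete, here is a hypothetical sketch of recovering an on-off-keyed LED code from the stream delivered by the dedicated communication pixels; the function name, threshold, bit framing and majority vote are all my own assumptions, not the scheme described in the paper :

```python
# Hypothetical on-off-keying decoder for the "communication" pixel stream.

from typing import List

def decode_ook(samples: List[float], threshold: float, samples_per_bit: int) -> List[int]:
    """Threshold each sample, then majority-vote within each bit period."""
    bits = []
    for start in range(0, len(samples) - samples_per_bit + 1, samples_per_bit):
        window = samples[start:start + samples_per_bit]
        ones = sum(1 for s in window if s > threshold)
        bits.append(1 if ones * 2 > len(window) else 0)
    return bits

# An LED toggling faster than normal video frame rates can follow, but which
# the dedicated high-rate pixels sample several times per bit:
stream = [0.9, 0.8, 0.9, 0.1, 0.2, 0.1, 0.9, 0.9, 0.8]
print(decode_ook(stream, threshold=0.5, samples_per_bit=3))  # [1, 0, 1]
```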

 

       S. Mandai et al. (University of Tokyo) : “3D range-finding image sensor”.  A 3D method based on a projected sheet beam is presented.  When using a 2D imaging array, it takes quite a bit of time to analyze the image and to “find” the reflected beam in it.  In this paper a clever row-parallel architecture is presented : it makes use of a row-parallel binary search tree and address encoding for high-speed detection and read-out of the activated pixel addresses.  In this way the speed of the depth detection can be increased by an order of magnitude. 
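The gain is easy to see in a conceptual sketch of binary-search address encoding (my own illustration, not the authors' circuit) : each step asks whether any pixel in one half of the remaining segment is active, a question hardware can answer with a single wired-OR, so a 1024-column row needs about 10 decisions instead of a 1024-step scan :

```python
# Conceptual model of binary-search address encoding over one row.

from typing import Sequence

def find_active_address(row: Sequence[int]) -> int:
    """Return the address of the first active pixel, or -1 if none."""
    if not any(row):
        return -1
    lo, hi = 0, len(row)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # any() over a segment models the wired-OR of that sub-tree
        if any(row[lo:mid]):
            hi = mid
        else:
            lo = mid
    return lo

row = [0] * 1024
row[737] = 1
print(find_active_address(row))  # 737, found after ~10 wired-OR decisions
```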

 

       D. Stoppa et al. (Fondazione Bruno Kessler) : “Range image sensor based on lock-in pixels”.  The range-finding architecture based on lock-in pixels is not new, but the presented pixel is.  The authors showed a (prior art) lock-in pixel based on a pinned photodiode, but also illustrated the drawback of this pixel : limited speed, because the electric drift field in the pixel is too low.  By increasing the drift field in the pixel the speed can be increased, and this is done by adding two poly-Si gates on top of the pinned photodiode.  These two poly-Si gates are toggled during operation.  A simple idea, but it has some very interesting benefits : the authors show that the speed of the pixels can be increased (by almost two orders of magnitude), as well as the demodulation contrast (by more than a factor of 2).  It should be remarked that, although the authors call this a pinned-photodiode pixel, the pixels are actually not pinned.  Gates on top of a real pinned photodiode will not create any drift field in the pixels, but in the paper the top layer of the pinned photodiode is no longer a p+, but a p-.  What is realized is actually not a pinned photodiode, but a buried p-type CCD channel.  Basically, it is not important what the thing is called; the performance is what really matters.
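For context, lock-in (demodulation) pixels typically sample the reflected modulated light at several phases and compute the phase shift, and thus the distance, from those taps.  A generic 4-tap continuous-wave demodulation sketch (a textbook formulation, not necessarily the exact scheme of this paper; all numbers are assumptions) :

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def depth_from_four_taps(a0: float, a1: float, a2: float, a3: float, f_mod: float) -> float:
    """Generic 4-tap CW demodulation: recover the phase shift, then the distance."""
    phase = math.atan2(a3 - a1, a0 - a2) % (2.0 * math.pi)
    return C * phase / (4.0 * math.pi * f_mod)

# taps sampled 90 degrees apart, echo delayed by 45 degrees at 20 MHz modulation
print(depth_from_four_taps(1.5, 0.5, 0.5, 1.5, f_mod=20e6))  # ~0.94 m
```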

 

       T. Azuma et al. (Panasonic) : “4K x 2K image sensor based on dual resolution and exposure technique”.  The presenter started with an interesting statement : the sensitivity of solid-state image sensors has increased by a factor of 40,000 compared to the device originally invented by Boyle and Smith.  It is up to the reader to judge whether this is correct or not !

In the paper a sensor is described that tries to improve the sensitivity by a further factor of 4.  This is done by an addressing architecture that allows the green pixels to be read and processed independently of the blue and red ones.  In this way the green pixels get an exposure time 4 times as long as normal.  This is the sensitivity gain, but of course pretty nasty motion artifacts will be introduced.  The blue and the red pixels get the normal exposure time, but in the red and blue channels the gain is created by pixel binning in 4 x 4 mode (actually giving even a factor of 16 in sensitivity !).  Clever signal processing should use the information of the blue and red channels to correct the motion artifacts of the green channel. 

 

       H. Wakabayashi et al. (Sony) : “10.3 Mpix back-side illuminated imager”.  The authors presented the back-side illuminated device that was already introduced by Suzuki during his plenary talk.  A few extra details were shown : the device can run at 50 fr/s at the full resolution of 10.3 Mpix with 10-bit resolution.  In the case of 12 bits, the speed slows down to 22 fr/s, while in the case of 60 fr/s, the resolution goes down to 6.3 Mpix.  Operation at 120 fr/s is also possible, but then with only 3.2 Mpix.  Apparently this sensor should be able to combine DSC and video in one device.  Pretty interesting numbers were given for the angle dependency : for a ray angle of 10 deg. the sensitivity drop is only 9 %, and moreover the difference between the three colour channels is less than 1 %.  At a ray angle of 20 deg. the remaining sensitivity is slightly less than 60 %, and the difference between the colour channels is 3 % (based on the figure shown in the proceedings).  The presenter mentioned that the cross-talk of the back-side illuminated sensor is minimized by means of an extra metal shield at the back of the device, by optimized doping of the photodiode, by optimized doping of the substrate and by an optimized wiring of the metal layers on the front side. 

Some numbers : 0.14 um process, 1P4M, LVDS output, lag below the measurement limit, power consumption 375 mW for the HD video mode.  Most of the other performance data is mentioned in one of the earlier blogs.

 

Conclusion : a very good image sensor session with very interesting papers; all presenters had very well-organized talks and high-quality slides.  Excellent work by Dan McGrath (and his crew), who chairs the image-sensor subcommittee of ISSCC. 

 

Albert 2010-02-11

ISSCC Report (3)

Thursday, February 11th, 2010

Wednesday Feb. 10th 2010 : Image Sensor session at the ISSCC2010.

 

The following papers were presented :

       Y. Chae et al. (Yonsei University and Samsung Electronics) : “2.1 Mpixel CMOS imager with column-parallel Delta-Sigma ADC architecture”.  The speaker gave an overview of the column-level ADCs used in CMOS imagers : the single-slope architecture is too slow, the successive-approximation ADC has an area issue, and the cyclic ADC consumes too much power.  Apparently there is a need for another architecture !  A 2nd-order Delta-Sigma ADC is implemented for the very first time as column-level ADC in a CMOS imager.  The circuitry per column is 4.5 um wide and 600 um long, contains 320 transistors and consumes 55 uW/column.  A 2.1 Mpixel sensor is realized with the following characteristics : 2.25 um pixel with 2-pixel sharing, 11000 e- full well, 80 uV/e-, 0.013 % column FPN, 75 dB dynamic range, 180 mW @ 120 frames/s, 0.1 % non-linearity.  The noise is as low as 2.4 e- at maximum frame rate and 1.9 e- at 130 times ADC sampling.  The latter is an improvement of 54 % compared to the same device with a single-slope ADC.  
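As background, a second-order Delta-Sigma modulator pushes its quantization noise to high frequencies, so that averaging (decimating) the 1-bit stream yields a high-resolution result.  A behavioural sketch of a generic textbook topology (my own model, not Samsung's column circuit) :

```python
# Generic second-order delta-sigma modulator, behavioural model only.

def delta_sigma_2nd_order(x: float, n_samples: int) -> float:
    """Modulate a DC input in [-1, 1] and return the decimated (mean) output."""
    i1 = i2 = 0.0
    ones = 0
    for _ in range(n_samples):
        y = 1.0 if i2 >= 0.0 else -1.0  # 1-bit quantizer
        i1 += x - y                      # first integrator with feedback
        i2 += i1 - y                     # second integrator with feedback
        ones += 1 if y > 0 else 0
    return 2.0 * ones / n_samples - 1.0  # mean of the bitstream

print(delta_sigma_2nd_order(0.37, 4096))  # converges towards 0.37
```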

 

       Y. Lim et al. (Samsung Electronics) : “1.1 e- Noise CMOS imager with pseudo-multiple sampling”.  A very simple idea with surprising results : instead of one high-resolution ADC conversion, the conversion is divided into several lower-resolution conversions.  These are realized with single-slope ADCs capable of multiple up- and down-ramps.  In this way a kind of multiple sampling is realized, and consequently the noise is reduced.  The device is realized in a 90 nm technology : 2.5 T shared pixel of 1.4 um, 110 uV/e- conversion gain, and 4100 e- full well.  Sensitivity of 3700 e-/lux.s and a dark current of 6.4 e-/s @ 55 oC.  At a frame rate of 6 fr/s a noise level of 1.1 e- was obtained with a 12-bit ADC and 16x gain.
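The noise benefit follows from simple statistics : averaging M uncorrelated read-noise samples reduces the rms noise by sqrt(M).  A minimal sketch (the starting noise value is my own, purely illustrative) :

```python
import math

# Averaging M uncorrelated samples lowers the rms read noise by sqrt(M).

def averaged_noise(single_sample_noise_e: float, m: int) -> float:
    return single_sample_noise_e / math.sqrt(m)

for m in (1, 4, 16):
    print(f"M = {m:2d}: {averaged_noise(2.2, m):.2f} e- rms")
# Note: only the temporal (uncorrelated) noise averages down; fixed-pattern
# components and anything sampled only once (e.g. kTC) do not.
```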

 

       K. Yasutomi et al. (Shizuoka University) : “CMOS imager with dual global shutter pixels”.  It is known that a 4T pixel in global shutter mode suffers from noise issues : the leakage current of the floating diffusion is very high, and in global shutter mode CDS is no longer possible with a 4T pixel.  For that reason, the authors have implemented a double shutter in the pixels.  The first shutter implementation is a storage node between the pinned photodiode and the floating diffusion.  This storage node is a kind of pinned photodiode as well, to keep the dark noise as low as possible.  Unfortunately its storage capacity is pretty small.  These characteristics allow this first storage node to be used in the case of very small signals.  To transfer the charges from the pinned photodiode into the storage node, the voltage in the storage node needs to be higher than in the pinned photodiode, so the storage node has a different doping concentration.  (Interesting to realize that foundries are willing to change/add these implants even for university experiments !)  The construction of the storage node looks very similar to a virtual-phase CCD cell.  For larger signals, when the noise of the shutter node is less important, a second shutter architecture is used, being the classical one : storage on the floating diffusion.  These two storage nodes are designed fully in parallel.  Results : the shutter efficiency of the first shutter is 99.7 %, that of the classical one 99.9 %; the dark current of the first shutter is 119 e-/s @ 27 oC, that of the second one 1221 e-/s, also at 27 oC.  The pixels are 7.5 um in size and have a fill factor of 25 %.  The full well is 10000 e- and the conversion gain is 38 uV/e-.
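A quick worked example of why the pinned storage node is preferred for small signals : the dark-current electrons accumulated while the globally shuttered charge waits to be read out (the hold time below is my own assumption, not a number from the paper) :

```python
# Dark signal accumulated on each storage node during the shutter hold time.

dark_rates = {"pinned storage node": 119.0, "floating diffusion": 1221.0}  # e-/s @ 27 oC
hold_time = 10e-3  # assumed 10 ms between global transfer and row readout

for node, rate in dark_rates.items():
    print(f"{node}: {rate * hold_time:.1f} e- of dark signal in {hold_time*1e3:.0f} ms")
# ~1.2 e- vs. ~12.2 e-: only the pinned node stays negligible for small signals
```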

 

       C. Posch et al. (Austrian Institute of Technology) : “Address event PWM image sensor with lossless pixel-level compression”.  Every frame in a video sequence contains a lot of redundant information, and based on that knowledge the authors created a device that only outputs changes in the scenery.  In this way the amount of output data can be greatly reduced, without any loss of information.  The pixels become quite complex (77 T), but the data reduction factor is quite impressive : up to 400, fully lossless.  During the presentation a very nice video demonstration was shown to illustrate the working principle of the device.
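A conceptual sketch of the underlying idea (my own frame-level model, not the authors' in-pixel circuit) : report only the pixels that changed, together with their new values, so the full frame remains losslessly reconstructible from the event stream :

```python
# Frame-level model of lossless change-event encoding.

from typing import Dict, List, Tuple

def encode_changes(prev: List[List[int]], curr: List[List[int]]) -> Dict[Tuple[int, int], int]:
    """Return {(row, col): new_value} only for pixels that changed."""
    events = {}
    for r, (p_row, c_row) in enumerate(zip(prev, curr)):
        for c, (p, v) in enumerate(zip(p_row, c_row)):
            if v != p:
                events[(r, c)] = v
    return events

prev = [[10, 10], [10, 10]]
curr = [[10, 55], [10, 10]]
print(encode_changes(prev, curr))  # {(0, 1): 55} -- 1 event instead of 4 pixels
```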

 

Will be continued !

 

Albert 2010-02-11

ISSCC Report (2)

Tuesday, February 9th, 2010

Being the Technical Program Chair of the ISSCC2010, yesterday morning (Monday Feb. 8th 2010) I had the honor to officially open the conference.  It was the first time in my life that I spoke in front of an audience of 2600 attendees.  The view from the podium over such a crowd of people is very impressive.

 

At the International Solid-State Circuits Conference one of the four plenary talks was delivered by Tomoyuki Suzuki of Sony (Senior VP).  He gave an amazing overview of Sony’s history in the CCD field.  We all know that Sony has a long track record in high-performance CCD imagers, but nevertheless the improvements Sony implemented in the CCD technology are quite impressive.  Just to name a few (not all of them were necessarily invented by Sony, but all are used in their products) :

       1987 : vertical overflow drain allowing an anti-blooming in the third dimension, without losing any fill factor in the pixel,

       1987 : HAD sensor, being the hole-accumulation diode or the pinned photodiode, with the capability of instant charge reset,

       1987 : on-chip colour filters,

       1989 : high-energy implanter to allow a deeper p-well for the CCD, resulting in better smear performance and a higher quantum efficiency,

       1989 : on-chip microlenses,

       1995 : on-chip colour resist filters,

       1995 : epitaxial wafers to reduce cross-talk and smear,

       1997 : inner lenses being a second microlens,

       1997 : gapless microlenses,

       2000 : double inner lenses,

       2000 : tungsten light shield for an improved smear engineering,

       2004 : single-layer transfer electrode, to increase the yield and surface flatness of the sensors, resulting in a better angular response,

       2008 : new wiring technology for the electrodes aiming for high-speed imaging.

 

Looking through this list, it must be possible to imagine or visualize the 3D stack of which the photodiode is part.  Such a CCD pixel is a beautiful example of vertical integration, starting deep in the silicon and extending several micrometers on top of the silicon.

 

Looking towards CMOS imaging, Suzuki-san named the following challenges :

       Pixel shrinkage, for which he suggested moving from aluminum interconnects to copper interconnects; this will drastically reduce the height of the optical stack on top of the silicon.  Although this technique is not really new, he showed beautiful SEM cross-sections of the pixel structures,

       Frame rate, for which he highlighted the column-level ADC architecture implemented by Sony, based on an up/down counter in every column (see the sketch after this list); this work was awarded the Walter Kosonocky Award in 2007,

       Sensitivity of the small pixels, for which he showed results of Sony’s back-side illumination technology.  He also showed some data coming from a new 10.3 Mpixel CMOS imager with 1.65 um pixel size : sensitivity of almost 10,000 e/lux.s, saturation level of 9130 e, conversion gain of 75 uV/e, 1.7 e rms noise in dark (16x gain), dark current of 3 e/s at 60 oC, and a dynamic range of 71 dB.  Also for this sensor a very nice SEM cross-section was shown, which revealed some interesting details of the technology.
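The up/down counter mentioned under the frame-rate challenge deserves a small illustration : by counting down during the reset ramp and up during the signal ramp, the column counter directly delivers the difference of the two levels, so digital CDS comes for free.  A behavioural sketch of the idea (my own model, not Sony's circuit) :

```python
# Behavioural model of an up/down-counter single-slope column ADC with
# digital CDS: the counter ends up holding signal minus reset.

def updown_adc(reset_level: int, signal_level: int, ramp_codes: int = 1024) -> int:
    counter = 0
    for code in range(ramp_codes):  # first ramp: reset phase, count down
        if code < reset_level:
            counter -= 1
    for code in range(ramp_codes):  # second ramp: signal phase, count up
        if code < signal_level:
            counter += 1
    return counter                  # = signal_level - reset_level

print(updown_adc(reset_level=37, signal_level=412))  # 375, offset removed
```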

 

The following near-term trends for CMOS imagers were reported :

       Ultra high speed,

       Global shutter,

       Wide dynamic range,

       Increasing depth of field.

 

At the end of the talk Suzuki-san referred to two new future imaging functions : 3D imaging and curved image sensors.  In other words, an interesting future is lying ahead of us !  

 

Albert 09-02-2010

ISSCC Report (1)

Monday, February 8th, 2010

It is today that the ISSCC officially starts, but yesterday the tutorials, two forums and two evening sessions were organized.  For the imaging community, the “Silicon 3D-Integration, Technology and Systems” forum was of interest, and especially the talk by Jean-Luc Jaffard of ST Microelectronics, entitled “Chip Scale Camera Module Using Through Silicon Via Technology”.  The agenda of his presentation looked as follows : imager market outlines, camera module physical size contributing factors, optical technologies, through-silicon-via process, camera module assembly and reliability, future evolutions, and conclusions.

In general this talk could be seen as a great overview of the state of the art of TSV for imagers.  For the non-imaging experts in the room the talk contained a lot of new information (and basically the forum was addressing a broad spectrum of interested people, not just imaging experts).  For the imaging engineers, the most interesting part was in the second half of the talk.

Jean-Luc showed a nice comparison of the pros and cons of an injected plastic lens, a molded glass lens and wafer-level optics.  Basically, wafer-level optics go hand-in-hand with TSV technology.  A series of cartoons illustrated the TSV process, as well as the future integration of camera modules : wafer lens, TSV sensor, image processing and memory all stacked on top of each other as a real 3D masterpiece.  Discussing the wafer-level camera roadmap, Jean-Luc mentioned that today’s modules already make use of wafer-level optics, but that the modules are still individually assembled.  The next step will be the combination of wafer-level optics and TSV sensors bonded before dicing, while a couple of years from now we will see wafer-level optics, including auto-focus means, bonded to a TSV sensor before dicing. 

If someone thinks that the development of camera modules will soon come to an end, the answer is simply NO ! 

Albert, 08-02-2010

ISSCC Report (0)

Sunday, February 7th, 2010

Wishing you a good morning from my hotel room in San Francisco.  Basically the International Solid-State Circuits Conference is ready to start !  Over 95 % of the presenters have already arrived and spent yesterday, Saturday, on a long day of rehearsals.  Today the plenary speakers will rehearse in the big ballroom, while in parallel the educational activities will start.  Today, Sunday, we have 9 tutorial sessions (each 1.5 hours) running in two parallel tracks.  That allows those who are interested to attend a maximum of 2 or 3 tutorials.  Attending a tutorial is a very nice way of getting familiar with certain topics in new fields.  Also on Sunday the first 2 forums are organized.  A forum is a full-day event focusing on one particular topic, with about 7 invited top experts in their field.  In one of today’s two forums (“Silicon 3D Integration Technology and Systems” and “Reconfigurable RF and Data Converters”), Jean-Luc Jaffard of ST Microelectronics will talk about TSV for imaging modules.  I do hope that I can attend Jean-Luc’s presentation; I cannot follow the complete program due to other obligations, such as the plenary rehearsals that I have to chair.

This evening a first evening session is organized (“Beyond CMOS – Emerging Technologies”), in parallel with the Student Research Preview.  The latter gives MSc and PhD students the opportunity to present their work in a flash presentation and by means of a poster.

Starting tomorrow, Monday, the paper presentations begin.  I hope to give you a short daily update on what is important for the imaging community.  I hope you will like it.

Albert, 07-02-2010