Archive for March, 2011

Image Sensors Europe (4)

Friday, March 25th, 2011

 

Stephane Laveau (DxO): “Colour shading: root causes and review of solutions”.

This presentation started by showing images of the effect in existing (high-end) mobile phone cameras.  Where does colour shading come from?  The main causes can be found in:

          the IR filter (the IR cut-off depends on the angle of incidence, so the transmitted spectrum varies with the light angle),

          microlenses (creating optical crosstalk),

          colour filters,

          optical stack.

Factors that make colour shading more complicated to model and to correct:

          the light sources, which come in various flavours,

          manufacturing tolerances, resulting in different IR cut-off wavelengths.

Correction methods:

          calibration per unit, but this is very expensive,

          in-camera correction by means of white balance adjustment, but this has limited applicability in the case of fluorescent or mixed lighting.

Adaptive correction: statistics are obtained from a single image by looking for changes in hue.  Examples of this adaptive correction were shown to illustrate the improvements of the method.
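
As a rough illustration of the idea (my own sketch, not DxO's actual algorithm): colour shading shows up as a slow spatial drift of the R/G and B/G ratios across the field, so one can estimate heavily smoothed ratio maps from a single image and divide them out, normalised to the image centre.  A minimal sketch in Python, assuming a demosaiced RGB frame in a numpy array:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def correct_colour_shading(rgb, blur=101):
    """Illustrative single-image colour shading correction (not DxO's method).

    Colour shading appears as a low-frequency drift of the R/G and B/G
    ratios across the image; estimate that drift with a large box filter
    and divide it out, normalised to the value at the image centre."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-6
    # Low-pass the colour ratios so that only the slowly varying shading
    # term remains (a real algorithm is far more selective about which
    # image statistics it trusts).
    rg = uniform_filter(r / (g + eps), size=blur)
    bg = uniform_filter(b / (g + eps), size=blur)
    cy, cx = rg.shape[0] // 2, rg.shape[1] // 2
    out = rgb.astype(np.float64).copy()
    out[..., 0] *= rg[cy, cx] / (rg + eps)   # flatten R/G towards the centre
    out[..., 2] *= bg[cy, cx] / (bg + eps)   # flatten B/G towards the centre
    return np.clip(out, 0, None)
```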

 

Yang Ni (New Imaging Technologies): “MAGIC: Logarithmic Image Sensing using photodiode in solar cell mode”.

A classical way of achieving wide dynamic range is the use of logarithmic pixels based on a transistor operating in weak inversion.  That solution gives quite a bit of FPN, suffers from low sensitivity and shows large image lag.  NIT has changed the concept of the logarithmic pixel: the photodiode is used in solar cell mode, and the voltage across the PD is a logarithmic function of the current generated by the light input.  Advantages: a physically exact dark reference (no light input means 0 V across the diode), on-chip FPN correction, high sensitivity and no image lag.  The noise in this logarithmic-mode pixel is equivalent to the noise of a 3T pixel, i.e. the classical kTC noise (around 500 uV).  This noise level is constant in the logarithmic mode.  It limits the dark performance of the sensor, but it does not limit the dynamic range of the sensor.
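
For reference, the logarithmic behaviour follows directly from the open-circuit voltage of an ideal photodiode.  The small sketch below is my own illustration (with an assumed saturation current), showing both the logarithmic response and the fact that zero light gives exactly 0 V, i.e. the physically exact dark reference mentioned above:

```python
import numpy as np

def solar_cell_voltage(i_photo, i_sat=1e-15, temp_k=300.0):
    """Open-circuit voltage of an ideal photodiode (illustrative only).

    V = (kT/q) * ln(1 + I_photo / I_sat): logarithmic in the photocurrent,
    and exactly 0 V when I_photo = 0."""
    k_b, q = 1.380649e-23, 1.602176634e-19   # Boltzmann constant, electron charge
    return (k_b * temp_k / q) * np.log1p(np.asarray(i_photo) / i_sat)

# Each extra decade of light adds roughly 60 mV at room temperature.
print(solar_cell_voltage([0.0, 1e-12, 1e-11, 1e-10]))
```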

A 768 x 576 sensor has been developed with a 5.6 um pixel, in a 0.18 um CMOS process; the fill factor is larger than 70 % (no micro-lenses), and the light detection threshold is smaller than 7 mlux @ 25 fps.  Good colour rendition can be maintained over a 120 dB variation in light input; no white balance is needed, nor any tone mapping.  Impressive videos were shown of the performance of the devices in a lab environment.

 

Sandro Tedde (Siemens): “Organic and hybrid photodiodes for active matrix imaging”.

Fabrication of the devices is based on a spray-coating technique on top of glass, or even on metal or polymer foils.  A QE of 85 % @ 550 nm is reported.  An attractive feature of this technology is the option to change the organic material; in this way the sensor can be optimized for a particular wavelength region.

Applications for this technology:

          near-infrared imagers with polymers, by changing the bandgap of the light absorber (i.e. changing the material) from 1.9 eV to 1.46 eV,

          flat-panel X-ray imaging; a prototype of 256 x 256 pixels was shown, including videos to demonstrate the capabilities of the image sensor,

          organic position-sensitive devices, based on a device with only 4 pixels, each about 1 cm2,

          a thin, optics-free light barrier based on these organic photodiodes,

          low-bandgap material by the addition of PbS quantum dots; the diameter of the quantum dots can be used to tune the sensitivity of the devices to a particular wavelength,

          integration of organic photodiodes on CMOS backplanes; a feasibility study on a 1.2 Mpixel device with 4.8 um pixel size was performed and the first results were shown.

 

Renato Turchetta: “Large area CMOS image sensors for X-ray and charged particle detection”

The presentation started with an overview of some theory about the detection of particles in silicon, and the difference between direct and indirect detection of incoming particles.  Next Renato introduced their INMAPS process, a CMOS process based on p-substrates with the nMOS transistors in a p-well and the pMOS transistors in an n-well.  Underneath the n-well an extra deep p-well is introduced to prevent the n-well from acting as an electron drain.  An example of a sensor designed in this technology is a ToF mass spectrometer: a 72 x 72 pixel array, 70 um x 70 um pixel size, time resolution < 100 ns, equivalent to more than 10 Mfps.

For integrating sensors, two examples were shown:

          medical imaging: full-field mammography (about A4 size) and chest imaging (about A3 size).  The devices are processed on 200 mm wafers and are made 3-side buttable; the photodiode size determines the noise level of 30 e-.  The design has just been completed: 145 mm x 120 mm, 50 um pixels, 40 fps analogue output, 2x and 4x binning,

          a TEM sensor in production: 4k x 4k, 4 sensors/wafer, direct detection of electrons, showing good MTF and DQE as well as good radiation resistance, 0.35 um CMOS, 61 x 63 mm2, ROI and pixel binning.

 

Albert 24-03-2011.

Image Sensors Europe (3)

Thursday, March 24th, 2011

 

Howard Rhodes (OmniVision): “Second Generation 65 nm OmniBSI CMOS Image Sensors”.

(Howard's slides were not available at the time of the presentation, so what you find here is based only on his oral presentation.)

The development of BSI at OmniVision started in 2006 on a 2.2 um pixel; in May 2007 BSI was already applied in a 0.11 um technology.  The first public announcements of the OmniBSI technology were made in May 2008 (on a 1.75 um pixel) and first mass production started early 2009.

At this moment OmniVision has about 15 different BSI devices in production, all fabricated on bulk silicon with a p+ back-side passivation.  Why not SOI?  Because there is no significant improvement in final thickness control, SOI is expensive, and the worldwide supply of SOI wafers is limited.  Process flow for the bulk silicon process: p/p+ wafer, frontside processing, wafer bonding, wafer thinning, BSI processing, CFA and microlens alignment, bondpad etch.

A lot of data was shown for 1.4 um and 1.75 um pixels made in the first generation of the BSI technology; this data was already presented earlier at IISW2009.  SNR10 (the illuminance at which the pixel reaches an SNR of 10, so lower is better) for 1.4 um: 88 lux, for 1.75 um: 53 lux.

OmniBSI-2 was announced in Feb. 2010 and products are out at this moment; in the meantime OmniBSI-3 is being developed.  OmniBSI-2 is concentrating on a 65 nm Cu CMOS technology node.  Advantages of OmniBSI-2 are: better design rules allowing bigger collection regions in the photodiode (QE, cross-talk, full well, PRNU, higher SNR), new pixel design improvements and new process modules.  For a 1.75 um pixel the SNR10 is improved to 41 lux with the OmniBSI-2 process; for a 1.4 um pixel the SNR10 is improved to 58.5 lux.  The SNR10 for the 1.1 um pixel was reported to be 117 lux.  OmniBSI-2 runs on 300 mm wafers, whereas OmniBSI-1 used 200 mm wafers.

OmniBSI-3 will focus on 40 nm design rules, with all kinds of improvements for the pixels as well as a new colour filter technology.

 

York Haemisch (Philips Digital Photon Counting): “Fully digital light sensors enabling true photon counting with temporal and spatial resolution”.

Old-fashioned PMTs are being replaced by silicon photomultipliers (SiPMs), based on avalanche photodiodes biased in Geiger mode.  Status of the SiPM work at Philips: 8 x 8 digital SiPMs with 6400 diodes (cells) per pixel on an area of 11 cm2 are available, and the components can be tiled in 4 directions.  In a new generation, an FPGA will be added to the system, allowing corrections and signal processing as close as possible to the sensor die.  This presentation also showed a lot of performance data and details about the measurements made.

So far an outstanding timing resolution has been measured (ideal for ToF), a lower dark count level than analog systems is reported, and the systems are very robust (the sensitivity hardly changes with temperature, and there is insensitivity to electromagnetic interference).  The first application field is nuclear imaging for medical applications; next, other medical imaging can/will be targeted, and ultimately analytical instrumentation as well.

 

Albert 24-03-2011.

Image Sensors Europe (2)

Wednesday, March 23rd, 2011

 

Avi Strum (TowerJazz) spoke about “High end image sensors challenges”.  Avi mentioned the demands on imagers for high-end applications:

          Sensitivity, over 15 stops for digital cinema,

          Dynamic range, up to 120 dB for automotive and digital cinema,

          Frame rate, driven by the need for multiple exposures,

          High resolution, driven by smaller pixels,

          Functionality to allow in-pixel computational analysis,

          Angular response, driven by DSLR applications.

The answer to all these challenges is BSI.  Avi spent most of his time explaining the BSI process developed by TowerJazz (in combination with its partner Soitec).  The main challenges seen are:

          the back-thinning of the silicon (using SOI and stopping the etch on the BOX),

          the alignment on the back side for post-processing (deep-trench alignment markers),

          suppression of the dark current (a perfect interface thanks to Soitec's Smart Cut, and buried AR layers).

According to the speaker, this process will be more expensive than others, but it will have outstanding imaging performance compared to bulk BSI CMOS.  It is expected that the process will become available to customers at the beginning of 2012.

During the Q&A it was learned that the process will run on 200 mm wafers and will be compatible with stitching.

 

Next was Shorab Yaghmai (Aptina): “Technology trends and market opportunities for image sensors for automotive applications”.  One of the key requirements for automotive is dynamic range.  In one of their first automotive products Aptina tried to obtain a larger dynamic range by means of multiple exposures.  Advantages: no added pixel circuitry, CDS readout and no reduction in full well.  Disadvantages: on-chip line buffers are needed and a 3x faster readout is needed.  Another method for WDR is based on lateral overflow with a soft TX pulse: the latter modulates the full-well capacity of the pinned photodiode during the exposure period.  Both techniques have their own pros and cons.  Aptina developed dedicated post-processing of the images to cope with additional blue blur (due to blooming in the different colours and the different exposure cycles) and motion blur.
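
As a very rough sketch of the multiple-exposure idea (my own illustration, not Aptina's implementation): a long and a short exposure of the same scene can be merged by rescaling the short exposure by the exposure ratio and using it wherever the long exposure saturates:

```python
import numpy as np

def merge_exposures(long_img, short_img, ratio, full_well=4095):
    """Merge a long and a short exposure into one WDR frame (illustrative).

    'ratio' is t_long / t_short; wherever the long exposure is close to
    clipping, the short exposure (scaled up by the ratio) takes over."""
    long_img = long_img.astype(np.float64)
    short_img = short_img.astype(np.float64) * ratio
    saturated = long_img >= 0.95 * full_well
    return np.where(saturated, short_img, long_img)

# Example: a 16x exposure ratio extends the range by about 24 dB (20*log10(16)).
```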

Another challenge for automotive is the need for low-light, low-noise imaging.  Improvement techniques in this field are dual conversion gain pixels, pixel-wise gain selection, low-noise analogue design (who would not want this?), digital CDS, digital FPN measurement and correction schemes, randomizing the readout channels (thank you Martijn Snoeij!?), and isolation of the analogue and digital circuitry.
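
To pick out just one item from that list as an illustration (a minimal sketch, not Aptina's implementation): digital CDS digitizes both the reset level and the signal level of each pixel and subtracts the two in the digital domain, removing offset variations per pixel and per column:

```python
import numpy as np

def digital_cds(signal_samples, reset_samples):
    """Digital correlated double sampling (illustrative sketch).

    Both the reset level and the signal level of each pixel are digitized;
    the subtraction is done after the ADC, so pixel and column offsets
    cancel in the digital domain."""
    return signal_samples.astype(np.int32) - reset_samples.astype(np.int32)
```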

All these techniques are applied in new Aptina devices: a 1 Mpixel sensor with 120 dB dynamic range, with compressed and uncompressed data output.

 

“High-Speed Imaging”, by Thomas Baechler (CSEM).

The talk started with a nice historical overview of high-speed cameras, starting from Albert Londe's camera in 1893 with 12 images in 0.1 s, up to a frame rate of 16 Mimages/s by G. Etoh.  The biggest issue these days in continuous shooting at high frame rates is the data rate needed to read all pixels at ultra-high speed.  For this reason data reduction is mandatory.  A few possible data reduction approaches were discussed: contrast images, delta images, adaptive ROI, optical correlation and spectral methods.  An example of optical coherence tomography with intelligent data reduction was shown, for a camera which would normally need a frame rate of 400 kHz.  The frame rate is reduced by pixel-level demodulation in a kind of smart pixel.  The pixels are 40 um in size and contain 1 PD and 30 transistors and capacitors per pixel.  This design reduces the data rate by a factor of 100 (or even more) compared to the original requirement.
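
The general principle behind such pixel-level demodulation is lock-in (I/Q) detection: the photo-signal is multiplied by in-phase and quadrature references and integrated, so only a slowly varying amplitude and phase per pixel need to be read out instead of every carrier period.  A numerical sketch of that principle (my own illustration, not the CSEM pixel):

```python
import numpy as np

def lock_in_demodulate(samples, f_mod, f_sample):
    """Lock-in (I/Q) demodulation of a modulated pixel signal (illustrative).

    Returns amplitude and phase of the component at f_mod; only these two
    numbers per integration window need to be read out, instead of every
    high-speed sample - which is where the data reduction comes from."""
    t = np.arange(len(samples)) / f_sample
    i = 2 * np.mean(samples * np.cos(2 * np.pi * f_mod * t))
    q = 2 * np.mean(samples * np.sin(2 * np.pi * f_mod * t))
    return np.hypot(i, q), np.arctan2(q, i)

# Example: a 100 kHz carrier sampled at 1 MHz over a 1 ms integration window.
fs, fm = 1e6, 1e5
tt = np.arange(1000) / fs
amp, phase = lock_in_demodulate(3.0 * np.cos(2 * np.pi * fm * tt + 0.4), fm, fs)
print(amp, phase)   # ~3.0 and ~-0.4 (the sign of the phase depends on the convention)
```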

 

Hendrik Malm (Nocturnal Vision): “Novel technology for nocturnal vision applications”.  This paper primarily deals with image processing of data generated by an existing video camera.  The data is processed with combined contrast enhancement, image stabilization and noise reduction in a novel way: spatio-temporal intensity averaging with sharpening.  The talk was supported by video clips to illustrate each step of the processing.  In principle the idea comes down to the use of multiple images (a minimum of 7 frames in the time domain, 9 x 9 kernels in the spatial domain) from a video stream to improve every individual image.  The whole concept is inspired by animal vision.  The algorithm can be implemented in a real-time processor.
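
A heavily reduced sketch of what spatio-temporal averaging with sharpening means in practice, using the numbers quoted above (my own illustration; the actual Nocturnal Vision algorithm weights the averaging adaptively and includes stabilization):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatiotemporal_denoise(frames, n_temporal=7, kernel=9, sharpen=0.5):
    """Naive spatio-temporal averaging followed by unsharp masking.

    'frames' is a (T, H, W) stack of greyscale frames; the last n_temporal
    frames are averaged in time, a kernel x kernel box filter averages in
    space, and a mild unsharp mask restores some of the lost detail."""
    frames = np.asarray(frames, dtype=np.float64)
    temporal = frames[-n_temporal:].mean(axis=0)        # average the last 7 frames
    spatial = uniform_filter(temporal, size=kernel)     # 9 x 9 spatial kernel
    blended = 0.5 * temporal + 0.5 * spatial
    return blended + sharpen * (blended - uniform_filter(blended, size=3))
```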

 

Peter Centen (Grass Valley): “Advances in image sensor technology for broadcast cameras”.

Broadcast has to deal with the following specs: illumination range 0.1 lux … 2000 lux … 100,000 lux, colour temperature 1800 K … 3200 K … 20,000 K, gain -6 dB … 18 dB … 40 dB, video up to 60 fr/s, and modulated light sources (fluorescent).  On the other hand, broadcast is full of existing standards that need to be obeyed, because cameras have to be synchronized with each other, but also with the studio and the broadcasting system.  Next, it was clearly described how the SNR measurement should be performed.

Advances in CCD technology reported for broadcast applications: the image diagonal remained fixed, the light conditions remained fixed, the video bandwidth and pixel size increased, and overall 15 dB in noise and sensitivity has been gained over the last 20 years.  For instance: read noise went from 30 e- (5 MHz) to 8 e- (30 MHz).

Broadcast is a low-volume, high-performance market: it is difficult to find a suitable CMOS imager, and as a result Thomson started its own development.  Future of broadcast sensors: imagers with additional features, single image for 3D, HDR live video, high speed.

During the Q&A Peter compared CCDs with CMOS for his application: pros of the CCD are the absence of any row noise and the absence of any column noise; pros of CMOS are the low temporal noise due to the parallel processing at column level, the single supply voltage and the plug-and-play way of operation.

Albert 23-03-2011.

Image Sensors Europe (1)

Wednesday, March 23rd, 2011

Tsutomu Haruta (Sony) discussed “A brief history of image sensor design and development”.  The main focus was the recently developed 16 Mpixel BSI CMOS sensor with 1.12 um pixels and a 2 x 4 sharing concept.  This sensor is the very first one by Sony with such a complex shared-pixel architecture.  On average the pixel has 1.375 T/pixel (presumably 8 photodiodes, each with its own transfer gate, sharing 3 readout transistors: 11 transistors for 8 pixels).

Another new SNR parameter was introduced: SNR per unit area.  Measured at 20 lux, F2.8 and 66.7 ms exposure time, the result is SNR = 19 dB per unit cell and SNR = 21 dB per unit area.  By means of the SNR per unit area it could be shown that the new 1.12 um technology performs at least as well as the 1.40 um pixel.  But because of the size reduction of the pixel, the SNR per cell still goes down by 2 dB.
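
A back-of-envelope check of that 2 dB (my own interpretation, assuming shot-noise-limited operation; this is not necessarily Sony's exact definition of the metric): the collected signal scales with pixel area, so shrinking from 1.40 um to 1.12 um should cost about 10*log10((1.40/1.12)^2), roughly 1.9 dB of SNR, which matches the gap between 19 dB per cell and 21 dB per unit area.

```python
import math

# Shot-noise-limited SNR scales as sqrt(signal), and the signal scales with
# pixel area, so the SNR in dB changes by 10*log10(area ratio).
area_ratio = (1.40 / 1.12) ** 2
print(10 * math.log10(area_ratio))   # ~1.94 dB, close to the quoted 2 dB
```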

Haruta showed that the speed of CMOS image sensors is increasing by a factor of 2 every 3 years, and this will continue for the coming generations of sensors.  But on the other hand, the power dissipation of CMOS imaging systems in mobile imaging cannot go higher than 300 mW.  So this is an interesting challenge.  It was not mentioned how Sony will solve this problem …

Albert, 23-03-2011