Archive for March, 2012

Report on Image Sensors Europe 2012 (4)

Monday, March 26th, 2012

In this fourth and last report about ISE 2012 I would like to give some more details about 2 presentations.

Firstly the Sony presentation given by Eiichi Funatsu on Small Pixel Technology.  The presenter posted a couple of questions on the screen (when moving from an 8 Mpixel sensor with 1.4 um pixels to a 13 Mpixel sensor with 1.12 um pixels) :

1) Does a 1.12 um pixel give an acceptable S/N ?  It was shown in the presentation that moving to a flat technology (no STI, no LOCOS) really improves the dark performance.  Also illustrated was the improvement of the angular response when moving to BSI technology.  So the question could be answered with a YES : the S/N will be acceptable thanks to new developments and further improvements of the existing technology.

2) Is it really possible to resolve 1.12 um ?  A higher pixel count on the sensor gives the observer a PERCEIVED larger S/N ratio.  For that reason Sony introduces a new parameter, the Perceived Noise Value.  The method relies on the resolution characteristics of the human visual system to come up with a quantitative indication for the Perceived Noise Value.  (More details : Proc. Autumn Meeting of the Society of Photographic Science and Technology of Japan, Dec. 2011, pp. 12-13.)  It was shown that although the 1.12 um pixel showed a worse noise performance, the perceived quality of the image was better than the one obtained with the 1.4 um pixel.

3) Is it possible to use the high resolution for something else ?  Yes, to extend the dynamic range by the implementation of RGBW colour filters.  This is not new (the presenter mentioned that some other company was doing this already, he did not mention which other company ?!), but what is new is the fact that Sony reconverts the RGBW data into Bayer RGB data.  In this way they can still rely on the existing ISP hard- and software.  Simple, but clever idea.

The closing presentation of ISE 2012 was given by Eric Fossum.  It was remarkable that so many people remained present at the conference to hear Eric give his talk.  I have to admit that I had already heard this talk before (it was even posted on YouTube a few months ago), but it is still a pleasure to hear and see the original presentation delivered live by the original author in front of an audience.  Eric has a great style of presenting his material and of bringing his ideas about the QIS (Quanta Image Sensor) across.  A few months ago he mentioned that he had a lot of great ideas on the implementation of the QIS, but that he had no money to explore them.  This time he mentioned that he has several PhD students who are going to help him in his research.  So a major step forward !

Eric Fossum started with a short overview of CCD and CMOS technology, highlighted the shortcomings of the existing technologies and introduced the QIS, which can solve a lot of the problems.  Very well structured presentation !

Overall the ISE 2012 was a great conference.  Many people gave very positive feedback about the program, the speakers and the content of the presentations.  In this blog only the technical presentations and the contributions on image sensors are reported.  This does not mean that the other presentations not highlighted in this blog were of lesser quality !

Thanks and congratulations to the organizers, and especially to the speakers.  Great job !

Albert, 26-03-2012

Report on Image Sensors Europe 2012 (3)

Friday, March 23rd, 2012

This time a bomb of technical data and numbers is thrown out !

The first set comes from the presentation of Hiroshi Shimamoto (NHK), who presented their ultra-high-definition camera with a sensor having the following specs :

– 7680 x 4320 pixels, 2.8 um x 2.8 um pixel size, 120 fr/s, 12 bit resolution ADC, progressive scan, 8000 ADCs on-chip, conversion time < 1.92 us, power < 3 W, noise < 6 electrons, 96 LVDS outputs in parallel, 0.18 um technology, 1P4M, PPD pixels, 3/2″ optical format, 61.3 dB dynamic range, 0.66 V/lux.s sensitivity, full well < 10,000 electrons.
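Just to put these numbers in perspective, here is a back-of-envelope check in Python (a sketch, assuming no blanking overhead and a raw 12-bit output stream; these assumptions are mine, not NHK's) : at 120 fr/s and 4320 rows there is roughly 1.93 us available per line, consistent with the quoted conversion time of < 1.92 us, and the raw data stream of roughly 48 Gbit/s spread over 96 LVDS outputs comes down to about 500 Mbit/s per channel.

```python
# Back-of-envelope check of the NHK sensor numbers quoted above.
# Assumptions (mine, not from the talk): no blanking overhead, raw 12-bit output only.

cols, rows   = 7680, 4320      # pixel count
fps          = 120             # frame rate
adc_bits     = 12              # ADC resolution
lvds_outputs = 96              # parallel LVDS channels

line_time_us = 1e6 / (fps * rows)        # time available per row
pixel_rate   = cols * rows * fps         # pixels per second
bit_rate     = pixel_rate * adc_bits     # total raw data rate, bit/s
per_lvds     = bit_rate / lvds_outputs   # load per LVDS channel

print(f"line time        : {line_time_us:.2f} us")      # ~1.93 us
print(f"raw data rate    : {bit_rate/1e9:.1f} Gbit/s")   # ~47.8 Gbit/s
print(f"per LVDS channel : {per_lvds/1e6:.0f} Mbit/s")   # ~500 Mbit/s
```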

The key component of the chip is the column-parallel two-stage pipelined cyclic ADC, being an optimum compromise between speed, power and area.  The ADC was already extensively highlighted during ISSCC 2012.  The complete readout of 1 line of information takes up 4 line times : 1) CDS of the column information, 2) conversion of the first 4 bits, 3) conversion of the last 8 bits, 4) reading the digital data off chip.
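For readers less familiar with pipelined readout, the little sketch below (purely illustrative, not NHK's actual implementation) shows the consequence of this 4-stage scheme : every individual row needs 4 line times before its data leaves the chip, but because the stages overlap, a new row is completed every single line time.

```python
# Illustrative sketch of a 4-stage column pipeline (not NHK's implementation):
# each row spends 4 line times in the chain, yet one row finishes every line time.

STAGES = ["CDS", "4-bit conversion", "8-bit conversion", "off-chip readout"]

def pipeline_schedule(n_rows, n_ticks):
    """Print which row occupies which stage at every line time."""
    for t in range(n_ticks):
        occupancy = []
        for s, stage in enumerate(STAGES):
            row = t - s   # the row that entered the pipeline s line times ago
            occupancy.append(f"{stage}: row {row}" if 0 <= row < n_rows else f"{stage}: -")
        print(f"line time {t}: " + " | ".join(occupancy))

pipeline_schedule(n_rows=6, n_ticks=8)
```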

Here are more numbers, coming from Renato Turchetta’s talk about the work done at Rutherford Appleton Labs.

– X-ray detector based on a stitched mask set (thanks for giving the right reference !) : 14 um x 14 um pixel size, 4k x 4k pixels, 40 fps, 0.35 um CMOS technology, 3T, radiation hardness up to 20 Mrad, analog output, on-chip binning and ROI options,

– wafer scale sensors : 2800 x 2400 pixels, 50 um x 50 um, 32 analog outputs, 40 fps, 139.2 mm x 120 mm, 6.7 Mpixel,

– sensors for synchrotron applications : 4k x 4k, 25 um x 25 um, 120 fps, HDR with 1:100,000 at 500 eV, this is still work in progress.

Besides all the data shown by Renato, he also introduced a very nice (and apparently very well received) idea of growing scintillators in cavities etched in silicon.  In a silicon wafer (225 um thick) holes are etched (hexagonal in shape and 25 um in size), and next a scintillator is grown in these holes.  In this way the conversion and capture efficiency for the visible photons generated by the scintillator is increased.

Also Mark Downing (ESO) had a lot of numbers to report about a CMOS device that will be used in an adaptive optics system on the biggest telescope of ESO, being 39.5 m in diameter.  The sensor characteristics :

1760 x 1760 pixels, 24 um x 24 um, 0.18 um technology, 6 metal layers, BSI, 4T pixels, 100 uV/e conversion gain, 4k electrons full well, 3.0 e noise, QE > 90 %, image lag < 0.1 %, Peltier cooled to -10 deg.C, power consumption < 5 W.

The chip has in total 70,000 (seventy thousand, this is not a typo) ADCs to comply with the required speed specification.  These 70,000 ADC-SS allow the processing of 40 lines in parallel and offer the option to work at 700 fps.  88 parallel LVDS channels bring the data to the outside.
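A quick sanity check on these numbers (a sketch, assuming one ADC per column for each of the 40 lines processed in parallel; that assumption is mine, it was not spelled out in the talk) : 1760 columns times 40 parallel lines gives 70,400 converters, indeed in line with the quoted 70,000, and at 700 fps each batch of 40 lines has roughly 32 us available.

```python
# Sanity check on the ESO sensor numbers quoted above, assuming one ADC per column
# for each of the 40 lines processed in parallel (my assumption, not stated in the talk).

cols = rows = 1760
parallel_lines = 40
fps = 700

n_adcs = cols * parallel_lines            # 70,400 -> in line with the quoted 70,000
cycles_per_frame = rows / parallel_lines  # 44 conversion cycles per frame
time_per_cycle_us = 1e6 / (fps * cycles_per_frame)

print(f"ADC count        : {n_adcs}")
print(f"cycles per frame : {cycles_per_frame:.0f}")
print(f"time per cycle   : {time_per_cycle_us:.1f} us")  # ~32.5 us per 40-line batch
```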

Albert, 23-03-2012.

Report on Image Sensors Europe 2012 (2)

Thursday, March 22nd, 2012

Valerie Nguyen (CEA-LETI) opened the show on Wednesday morning.  According to Valerie, the market trend in imaging will be : “Pixels Everywhere”.  If solid-state imaging is going to be used for creating and generating nice images, then the trend will be MORE PIXELS.  But if solid-state imaging is going to be used for control purposes, then the trend will be MORE THAN PIXELS.  Based on existing marketing reports, the year 2009 delivered 2 M image-sensor wafers with a growth rate of 11 %, and the prediction is that we will get 4 M wafers in 2015, still with a growth rate of 15 %.  So in the short term, the growth in the imaging business is certainly not coming to an end.  But one should realize that in the last 10 years the volume for VGA sensors (as an example) has grown by a factor of 100, while the price went down by a factor of 25 !  Consumers get more silicon for less money, manufacturers get less money for more silicon.

Valerie gave a very nice overview of the imaging activities in the eYe valley around Grenoble.  Apparently a lot of companies in that area are active in solid-state imaging.  She also illustrated what CEA-LETI can do for the imaging industry, such as : thinner BEOL, lightpipes, inner lenses, BSI, colour filter with IR cut-off included, nanoprint for micro-lenses, TSV, etc.  Apparently Grenoble is the place to be for sensor innovations (and maybe also for skiing ?).

Valerie concluded with some statements about the combination of several imaging techniques in one device.  For example micro-bolometers in combination with a visible sensor, or InGaAs underneath Si.  In this way a fusion can be realized between several imaging methods/techniques.  The fusion idea of putting several imaging methods and techniques into one device was mentioned a couple of times by different speakers.

Next was Mats Wernersson (Sony) who convinced the audience that “Mother nature is a bitch”.  He focused on the system aspects of future mobile imaging, and clearly proved that one has to think at the system level when trying to improve the sensor performance.  For instance, low light performance is much more than just sensor sensitivity !  Even if you can count single photons, it is not possible to make nice images with a single photon ; it takes a billion photons.  Another nice example of the system-level impact of improving the sensor : BSI allows the use of faster lenses.  So the combination of higher light sensitivity and the possibility of a lower F-number is the real benefit of BSI, and this combination is much larger than just the light-sensitivity increase of the sensor itself.  Very nice talk with the typical dry humor of Mats.
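To make that argument concrete, a tiny sketch (the F-numbers and QE values below are my own illustrative assumptions, not Mats' numbers) : the light collected by the lens scales roughly with 1/F², so a faster lens multiplies whatever sensitivity gain BSI brings at the pixel level.

```python
# Illustrative only (numbers are assumptions, not from the talk): light reaching the
# sensor scales roughly with 1/F^2, so a faster lens multiplies the BSI QE gain.

def relative_gain(f_old, f_new, qe_old, qe_new):
    lens_gain = (f_old / f_new) ** 2   # irradiance gain from the faster lens
    qe_gain   = qe_new / qe_old        # sensitivity gain of the sensor itself
    return lens_gain, qe_gain, lens_gain * qe_gain

lens, qe, total = relative_gain(f_old=2.8, f_new=2.0, qe_old=0.55, qe_new=0.70)
print(f"lens gain {lens:.1f}x * QE gain {qe:.2f}x = {total:.1f}x overall")  # ~2.5x
```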

Gennadiy Agranov (Aptina) talked about pixel developments.  It is not really new that the industry is moving to smaller pixels, but the main challenge is of course to keep up the performance of these tiny pixels.  Gennadiy showed a lot of interesting data about the performance of the devices, for instance QE and dark current of various generations of sensors.  Also worthwhile to mention is the work of Aptina on global shutter CMOS devices.  Over the years they improved the in-pixel storage concept and went from a device with 40 electrons of noise and a QE of 40 % in 2000, to a device with 1.5 electrons of noise and a QE of 85 % three generations later in 2012.  These numbers hold for global shutter devices.  The latter device has a shutter efficiency of 99.96 % (which comes close to the transport efficiency of CCDs : 99.99995 %).

In the last part of his presentation, Gennadiy showed the first results about a CMOS device intended for stereo vision.  A single image sensor is used for 3D, based on a pixel with an asymmetric angular response.  Very interesting idea !

Albert, 22-03-2012.

Report on Image Sensors Europe 2012 (1)

Wednesday, March 21st, 2012

Yesterday evening, March 20th, ISE 2012 opened with a keynote by Nobukazu Teranishi of Panasonic.  Nobu is a world-renowned top expert in the field of solid-state imaging.  Together with his team at NEC he invented the pinned photodiode and the vertical overflow drain.  But yesterday evening he talked about “Dark Current and White Blemishes”.

His talk started with a nice overview of the work Nobu and his team did in the early CCD days to reduce the dark current.  A lot of very basic technology steps were developed at that time to, firstly, understand the behaviour of defects in silicon and, secondly, to optimize the fabrication technology to reduce these defects.  He highlighted all kinds of different gettering techniques : internal gettering, external gettering and proximity gettering.  If you listen to the story, it all seems so logical what the various gettering steps can do to reduce the amount of stress in the wafers, to attract impurities and to reduce the dark current.  But for sure it must have taken many man-years of work to come to the low dark current levels present in today’s devices.

The last part of his talk was about a new dark current generation model for the pinned photodiode.  Nobu explained that the generation-recombination centers in the p+ top layer and at the silicon interface still contribute to the dark current of the PPD.  He supported his theory by means of an analytical model and by applying this model to data published by others.  Ultimately he came to the conclusion that it is still of utmost importance to keep the dark current as low as possible, and one way of doing so is to work with devices that have no LOCOS and no STI to isolate transistors, but have all isolation done by means of implants.  This statement also seems straightforward, but in the CMOS world it appears to be new, although in the CCD world this technique was already applied a while ago.

Good start of the conference !

Albert, 21-03-2012.

How to Measure : Fixed-Pattern Noise in Light or PRNU (2)

Tuesday, March 13th, 2012

In the previous “How to measure” blog the basic measurement and calculation of the PRNU or Photo-Response Non-Uniformity was discussed. Although the mentioned numbers were not that high, it is always wise to check where the PRNU is coming from. A possible source of non-uniformity with light is shading : a low-frequency component that changes the response of the pixels (e.g. from top to bottom, from left to right, from the middle towards the corners of the sensor). Even if the shading component is small, it can result in (minor) changes in spectral response across the sensor. This type of error can have a severe effect on colour shading in a colour sensor and can make colour reconstruction pretty complicated. So it is absolutely worthwhile to check out the shading under light conditions.

To evaluate the light shading, a similar method can be used as the one applied to evaluate the dark shading. Also the same images are used as before : multiple light images taken at room temperature and at different integration times. To quantify the light shading, the images taken at an exposure time of 80 ms are used : at 80 ms, the average signal is 25 % of the saturation level. Because light shading is the low-frequency variation of the signal, the following procedure will be followed (a code sketch follows the list below) :

all images captured at 80 ms integration time are averaged ; in this way the temporal noise will be reduced,

the averaged image is passed through a low-pass filter with a 9×9 filter kernel. This operation will reduce the (high-frequency) FPN.
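As a minimal sketch of this procedure (assuming the raw 80 ms frames are available as a stack of 2D numpy arrays; the exact low-pass kernel is not specified in the text, so a 9×9 box filter is taken as an example) :

```python
# Minimal sketch of the averaging-and-filtering step described above.
# Assumption: the 80 ms frames are available as a list of 2D numpy arrays.
import numpy as np
from scipy.ndimage import uniform_filter

def light_shading(frames_80ms):
    """Average the frames to suppress temporal noise, then low-pass filter
    with a 9x9 kernel to suppress the (high-frequency) FPN."""
    avg = np.mean(np.stack(frames_80ms, axis=0), axis=0)
    return uniform_filter(avg, size=9, mode="nearest")

# For a colour (Bayer) sensor the four colour planes would be extracted first, e.g. :
# shading_blue = light_shading([f[0::2, 0::2] for f in frames])
```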

The result after averaging and filtering is shown in figure 1.


Figure 1 : low-frequency variation in light signal at 80 ms integration time.

Clearly visible in figure 1 is the non-uniformity of the light signal :

top left is the shading of the blue pixel, which looks circularly symmetric, with its lowest signal value in the center of the sensor,

top right is the shading of the green pixel in the blue line, which looks to have an increasing signal towards the top of the sensor,

bottom left is the shading of the green pixel in the red line, which looks to have a decreasing signal towards the top of the sensor,

bottom right is the shading of the red pixel, which looks circularly symmetric, with its highest signal value in the center of the sensor.

The shading illustrated in figure 1 can be expressed as :

the peak-to-peak value, equal to 20 DN, 36 DN, 30 DN and 41 DN for the four different colour signals,

maximum value, equal to 1270 DN, 1475 DN, 1467 DN, 1211 DN, and minimum value, equal to 1250 DN, 1439 DN, 1437 DN, 1180 DN for the four different colour signals.

In principle these numbers are more or less meaningless without any further “reference”. The additional parameters needed are :

evaluation temperature, being room temperature,

integration time, being 80 ms,

dark signal offset, being 819 DN,

average signal with light without offset correction, being 1260 DN, 1457 DN, 1452 DN and 1195 DN for the four colour channels,

average signal with light and with offset correction, being 441 DN, 638 DN, 633 DN and 376 DN.

Taking these numbers into account, the shading in the four colour channels is equal to 4.5 %, 5.6 %, 4.7 % and 10.9 % of the average signal measured in each individual colour channel, measured at 25 % of saturation. These numbers are not really that low, but take into account that it is not a standard deviation that is quoted, but a peak-to-peak value or a maximum deviation. Do you prefer to have lower numbers ? Simply reference the shading numbers to the saturation level and they will become about a factor of 4 lower. That is what is called “specsmanship”.
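For completeness, the percentages above can be reproduced directly from the quoted DN values (a small sketch, using the numbers as given in the text) :

```python
# Reproducing the shading percentages from the DN values quoted above.

peak_to_peak = [20, 36, 30, 41]       # DN, per colour channel
avg_signal   = [441, 638, 633, 376]   # DN, offset-corrected average at 25 % of saturation

for pp, avg in zip(peak_to_peak, avg_signal):
    shading_vs_signal = 100.0 * pp / avg       # referenced to the channel's own signal
    shading_vs_sat    = shading_vs_signal / 4  # signal is 25 % of saturation, hence /4
    print(f"{shading_vs_signal:4.1f} %  (or {shading_vs_sat:4.1f} % of saturation)")
# -> 4.5 %, 5.6 %, 4.7 %, 10.9 % of the average signal; about 4x lower vs. saturation
```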

Albert, 13-03-2012.