BSI Foundry Available ?!

July 11th, 2011

While travelling abroad I was surprised to find out that there exists a special BSI fab.  The proof is shown below !  Does this open up BSI technology for the whole wide world ?

110711_blog_1

Albert, 11-07-2011.

Birthday Party

June 20th, 2011

Last Friday Caeleste celebrated its 5th anniversary with a Scientific Seminar.  Bart Dierickx invited 6 speakers in the domains of X-ray vision, radiation tolerance and particle detection.  These were :
 
– Evi Bongaers (SkyScan, Kontich, Belgium), “The use of micro-CT in material science”
– Jeroen Hostens (SkyScan, Kontich, Belgium) on “Micro-CT in biological science”
– Albert Theuwissen (undersigned), on “Cosmic radiation damage in image sensors”.  (I promised myself that this was going to be the last time I was giving this presentation.)
– Erik Heijne (CERN, Geneva, Switzerland), on “From microscope to attoscope: silicon eyes at the CERN LHC”
– Pablo Fajardo (E.S.R.F., Grenoble, France), on “Detectors for high energy X-ray synchrotron radiation applications”
– Claire Bourgain (V.U.B., UZ-Brussel, Belgium), on “The diagnostic relevance of color X-ray”.  It seems to be really true that there is color information in X-rays.  It helps to better interpret X-ray mammograms.
After the Seminar we enjoyed a good barbecue on the roof of Caeleste’s office.

Congratulations to Bart and his team.  Hopefully we see each other again at the next birthday party.

Albert 20-06-2011

Day 4 of the International Image Sensor Workshop in Hokkaido, Japan

June 14th, 2011

The last day of the workshop contained only two sessions : the first one on global shutter pixels and the second one on high-speed and ADCs.

It is clear that a lot of effort is being put into global shutter pixels.  The rolling shutter of CMOS imagers is still a disadvantage compared to the global shutter of CCDs.  Nevertheless the performance of global-shutter CMOS pixels is constantly improving.  The world’s first global shutter in combination with back-side illumination was presented.  The pixel is an 8-transistor cell and to realize the global shutter, the information is stored in the voltage domain on an external-to-the-silicon capacitor.  This leads to a very high shutter efficiency : a light leakage of only 1/110,000.  This presentation fitted perfectly into the scope of the workshop, being : discussing work in progress.  Although the shutter efficiency is already pretty high, the back-side technology presented needs further improvement.  But it should be encouraged that people are willing to show their results at a very early stage of their projects.  This leads to interesting discussions !

Completely opposite to work-in-progress was the next presentation, based on a product : a 1.2 Mpixel global shutter sensor with automatic gain selection.  This pixel makes use of 5 transistors and the in-pixel storage node is realized as a MOS capacitor with storage in the charge domain.  The shutter efficiency is not as good as the one reported in the previous work, but on the other hand, the pixel is also much smaller, being 3.75 µm.  It was claimed that this is the smallest pixel in the industry with a global shutter.

Also interesting was the presentation in which a high-speed column-parallel CMOS image sensor was developed with an SA-ADC based on the PTC.  It was the first time I saw the “PTC” in the definition/name of an ADC.  A similar idea was also implemented by one of my PhD students : apply a very fine ADC step when needed (= low light levels) and allow a very coarse ADC step when allowed (= noise dominated by photon shot noise).  The ADC is capable of resolving 16 bits at the lowest segment with a conversion time of only 2.475 µs.  Papers on multiple windowing, global pipelined shutter CMOS devices and 2-stage pipelined cyclic ADCs followed.  Apparently dividing the ADC workload into multiple steps seems to be the way to go for high-speed applications.  This was also described in the last two papers of the workshop.
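The fine-step/coarse-step idea can be made concrete with a small sketch.  This is my own illustration, not code from the paper : the function name and the read noise figure are assumptions, but the principle is that the allowed quantization step tracks the sensor's total noise, which is dominated by photon shot noise at high signal levels.

```python
import math

def adc_step_electrons(signal_e, read_noise_e=2.0, fraction=0.5):
    """Allowed ADC quantization step (in electrons) at a given signal level.

    The total temporal noise at signal S is sqrt(read_noise^2 + S),
    with photon shot noise sqrt(S) dominating at high signals.
    Keeping the ADC step at a fixed fraction of that noise hides the
    quantization noise under the sensor's own noise floor.
    """
    total_noise_e = math.sqrt(read_noise_e ** 2 + signal_e)
    return fraction * total_noise_e

# Fine steps in the dark, coarse steps near full well:
print(round(adc_step_electrons(0), 2))      # → 1.0
print(round(adc_step_electrons(10000), 1))  # → 50.0
```

With these assumed numbers the step can grow by a factor of 50 from dark to bright, which is exactly the saving that a PTC-guided segmented ADC exploits.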

Overall this was a great workshop !  A high-level technical program, superb organization and great service.  Thanks very much to our Japanese friends who were responsible for the organization of the workshop under extremely difficult conditions.  CONGRATULATIONS and THANKS VERY MUCH to Nobu, Junichi, Shoji and all others that contributed to the success of the workshop !

Tomorrow no more news.

Albert, June 14th, 2011

Day 3 of the International Image Sensor Workshop in Hokkaido, Japan

June 12th, 2011

Peter Seitz (CSEM) opened with his invited talk on “Single Photon Imaging”.  He started with giving us a definition of single photon detection.  It looks very straightforward, but apparently it is not, because together with the detection of incoming photons,

          you may miss some of them and/or,

          you think you detect a photon but in reality the electron detected may be generated through dark current.

Next he gave a very nice overview of several techniques that can be used for single photon detection, including good old vacuum tubes, hybrid solutions combining vacuum and solid-state, and finally all kinds of solid-state devices.  All these techniques look pretty familiar, but if you see them all gathered on one sheet you are surprised by how much work has been done in this field; apparently (and for us fortunately) the holy grail has not yet been found.

Another very informative part of the talk was the link between dark current and light sensitivity.  To detect light a certain bandgap of the semiconducting material is needed; to make the sensors more sensitive to longer wavelengths a smaller bandgap is a necessity, and the latter will also increase the dark current.  Something we probably all know, but it was the first time I saw this in a graph of dark current versus bandgap energy (ideal curve together with published data).
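The bandgap/dark current trade-off can be put in rough numbers.  The sketch below is my own illustration, not the curve from the talk; the exact prefactors differ per device, but depletion-region generation current scales with the intrinsic carrier density, roughly exp(-Eg/2kT), so a smaller bandgap raises the dark current exponentially.

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def relative_dark_current(bandgap_ev, temp_k=300.0, ref_bandgap_ev=1.12):
    """Generation dark current relative to silicon (Eg = 1.12 eV).

    Assumes depletion-region generation, which scales with the
    intrinsic carrier density n_i ~ exp(-Eg / 2kT).  Only the
    bandgap dependence is modelled here.
    """
    kt = K_BOLTZMANN_EV * temp_k
    return math.exp(-(bandgap_ev - ref_bandgap_ev) / (2.0 * kt))

# Germanium (Eg ~ 0.66 eV), better in the NIR but thousands of times
# more dark current than silicon at room temperature:
print(f"{relative_dark_current(0.66):.1e}")
```

This is why extending sensitivity towards longer wavelengths (smaller bandgap) and keeping dark current low pull in opposite directions, exactly the trend the graph in the talk showed.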

At the end of the talk it was clear that there does not exist a single solution for single photon detection in all applications.  The talk concluded with a flow chart for the selection of the appropriate photosensor technology with single-photon resolution, depending on the parameters and specifications of the application.

Then it was SPAD-time !  Several papers showed new device structures and new technologies to overcome the classical drawbacks of SPADs, being for instance their limited quantum efficiency (pretty low in the red part of the spectrum) and their limited fill factor (due to guards and circuits in every pixel).  It is clear that SPADs are rapidly improving as well as expanding their application field.  On the other hand, all papers presented came from European academics (one with ST’s support).  So when will the big imaging companies jump on the SPADs ?

After the SPAD the real big guys showed up : sensors of multiple cm2 instead of mm2.  A few examples of silicon tiles (these can no longer be called dies), all CMOS, are :

          20.2 x 20.5 mm2, 300 mm wafer-sized, monochrome,

          23 x 25.9 cm2, 4 sensors butted with very small butting gaps, RGB,

          61 mm x 63 mm, for electron detection.

All these huge sensors are making use of stitching technology to make silicon devices that are much larger than the stepper’s reticle size.  Stitching seems to be common practice these days and even available at the CMOS foundries.  Apparently everyone is already that familiar with stitching that no one is any longer referring to the original work in the field of stitching for imagers …

The technical part of the day ended with two interesting papers on medical topics :

          a first attempt to do colour imaging for X-rays (energy detection), and

          single grain TFTs and photodiodes intended for large area X-ray detectors.

During the workshop banquet three important announcements were made :

          the best poster award was won by Mikio Ihama and co-workers from FujiFilm for the poster : “CMOS Image Sensor with an Overlaid Organic Photoelectric Conversion Layer : Optical Advantages of Capturing Slanting Rays of Light”.

          a newly established Exceptional Service Award was presented to Vladimir Koifmann for the creation and the editorship of the Image Sensors World blog,

          the Walter Kosonocky Award for the best paper published in 2009 and 2010 was handed out to Hayato Wakabayashi and his co-authors from Sony Corporation for the paper entitled : “A 1/2.3 inch 10.3 Mpixel 50 frames/s Back-Illuminated CMOS Image Sensor”.  This work was presented at the 2010 IEEE International Solid-State Circuits Conference.

Many congratulations to the winners of the three awards.  Hopefully our community keeps up the excellent work and will be able to publish their results obtained.

Tomorrow more news.

Albert, June 12th, 2011

Day 2 of the International Image Sensor Workshop in Hokkaido, Japan

June 11th, 2011

Day 2 started with a session on Time-of-Flight sensors.  ToF seems to be the logical choice for depth sensing, but also for fluorescence detection.  The session kicked off with a kind of overview paper presented by Robert Henderson (although he did not contribute to the paper, he gave an excellent talk!).  In this overview three different techniques to perform ToF were compared :

          Buried channel demodulator, which suffers from the fact that a special technology is needed,

          Current assisted photonic demodulator, which suffers from power consumption, but was mentioned to be the best choice for low cost applications,

          The pixel with in-pixel switched cap circuitry, suffering from its complex pixel structure but seems to be the preferred choice in the case of industrial and/or security applications.

Another interesting observation with ToF is the move towards pinned photodiode pixels as well.  Apparently most foundries that supply the silicon for these ToF devices have pinned photodiodes available these days.  In other cases, where photogates are still used, the photogates can be biased negatively to get the same accumulation at the interface and to reduce the dark current issues.

A very nice invited talk came from Masatoshi Ishikawa : “New application areas made possible by high speed vision”.  The talk had a lot in common with the one he gave last year at the ISSCC forum.  But it was still great to hear it again, also because of the great movies highlighting the capabilities of custom designed high-speed sensors.  In many high-speed applications regular devices are used in combination with highly sophisticated algorithms.  But the lesson of M. Ishikawa is the following : exchange spatial resolution for temporal resolution and the algorithms will become much simpler.  For normal sized robots a frame rate of at least 1000 fr/s is needed, for micro-machined types of robots 10,000 fr/s is a must.

In one of the papers the use of ToF with a standard CMOS 5T pixel was illustrated.  The sensor originally was not intended for this application, but nevertheless, it is possible to detect depth.  The results are not up to the same quality level as the specialized ToF, but a bit more optimization of the sensor and/or timing could lead to better results.  A very interesting side-effect was mentioned during the Q&A : the ToF mode of this device can be used to measure the delays in the sensor’s metal wiring.

Dark current, dark current  non-uniformities, hot pixels, they all seem to be a joy for ever !  Of course we all knew the positive influence of having an accumulation layer at the interface, but a couple of papers illustrated how you can do this dynamically in CCDs as well as with the transfer gate of the CMOS devices.  This is absolutely the way to go if the sensors need to show dark currents close to their theoretical limit.  Although, it was mentioned in one of the papers that over time, a negative bias of the transfer gate can introduce some nasty aging effects.

Day 2 ended with a 25th anniversary talk of the workshop given by the father of the workshop himself : Eric Fossum.  The talk started with the statement of Bernard of Chartres (1115 AD) “We are like dwarfs on the shoulders of giants, so that we can see more than they, and things at a greater distance, not by virtue of any sharpness of sight on our part, or any physical distinction, but because we are carried high and raised up by their giant size”.  This was a quite nice start to acknowledge the contribution of several great solid-state imaging pioneers that prepared the path on which all of us are walking these days.  The older images and papers that Eric showed brought back good memories of several previous workshops.

Tomorrow more news.

Albert, June 11th, 2011

Day 1 of the International Image Sensor Workshop in Hokkaido, Japan

June 10th, 2011

This Wednesday the IISW 2011 started in Japan.  After all the issues that happened on March 11th, several people cancelled their contribution and/or trip to Japan, but nevertheless, the technical program of the first day was more than packed.  It is impossible to report about all the papers, also because the afternoon program contained about 30 flash presentations that go together with the poster session.

In the morning session the pixel shrinkage was a kind of main topic.  What could be learned from the presentations and the publications ?

          All companies are moving towards a kind of common performance level : noise, dark current, QE, SNR of the various technologies seem to converge, although the different companies are using different technologies,

          CMOS image sensors are deviating more and more from standard CMOS technology.  In the early days of CMOS it was a strong argument that CMOS imagers could be made in standard, cheap CMOS technologies, but this argument no longer holds.  Another consequence concerns the second source option for CMOS.  Because everyone is relying on their own technologies, one can forget about second sourcing (except maybe second sourcing within the same company),

          Over the last couple of years BSI became an important technology for CMOS devices, but as usual, a new technology also boosts developments in the old technology.  It is quite remarkable that several companies reported at the workshop on the light guiding technology used in their 1.45 µm pixel FI devices.  In this way they could keep up the light sensitivity in these small FI pixels, even in some 1.1 µm pixels.  But with the step towards sub-micron pixel pitches, BI seems to be inevitable.

There was an interesting invited talk on colour filter technology by Hiroshi Tagushi of FujiFilm.  Quite a nice paper containing information on the road map of the filter materials.  It seems that photoresist + colour pigment is no longer an option for sub-micron pixels because, simply, there is no space anymore for the light-sensitive resist component in the filter material.  For that reason sub-micron pixels with colour filters will also need a classical litho step for the filter definition, in combination with reactive ion etching.

Day one ended with the poster session.  Every poster presenter gets a few minutes to introduce his/her work during a flash presentation.  Within 2 hours the amount of information thrown at the audience is incredibly large.  Not just because of the quantity but also because of the quality of the work.  It should be remarked that over the years the overall quality of the content as well as of the presentations given at the workshop has grown enormously.  It clearly demonstrates the importance of this workshop.  Tomorrow more news.

Albert, June 10th, 2011.

Second Course “Hands-On Evaluation”

May 18th, 2011

Last week I taught the new course “Hands-On Evaluation of Image Sensors” for the second time.  The course location was Copenhagen, and the organization was in the hands of CEI-Europe.  As I reported a while ago about the first edition of this course, there were several items that could be improved.  The main change compared to the previous edition was the availability of extra software functions/tools that could be used during the measurements/calculations.  So the participants could focus more on the interpretation of the data.  If I compare the two editions of the course, I think that the availability of the functions was really a step forward.  I can also confirm this by the timing of the course.  The participants had their measurement results available much quicker than in the first course.

I also went through a cycle of further updating and optimizing the course material; this is always necessary after the first edition(s) of a course.  Based on the questions and remarks from the participants during the first course, I learned about the quality of my own sheets.  Also during the second version of the training, fewer errors and typos could be found in the material, and I had no complaints about the course notes.  Apparently the extra work increased the quality of the training in general and of the course notes in particular.

The next “Hands-On Evaluation” class is scheduled for November 2011 in Dresden (see also www.cei.se).  What will be further improved for the third edition ?  I am thinking about updating the discussion and measurement on QE and MTF.  During the first two editions I just showed how to perform the measurements and showed the results obtained.  Next time we will perform the measurements in the class and go through the algorithms and mathematics needed to process the raw measurement data.  The set-ups for the QE and MTF measurements are pretty large, so it will not be possible to allow all participants to perform their own experiments.  For that reason, QE and MTF will be measured in a plenary session.  But all other sensor parameters will be characterized by the participants themselves.  That part of the training remains unchanged !  Maybe we see each other in Dresden ??

Albert, 17-05-2011.

Image Sensors Europe (4)

March 25th, 2011

 

Stephane Laveau (DxO) : “Colour shading : root causes and review of solutions”.

This presentation started with showing images of the effect in existing (high-end) mobile phone cameras.  Where does colour shading come from ?  Main causes can be found in :

          IR filter (the IR cut-off depends on the angle of incidence, thus the spectrum varies with the light angle),

          microlenses (creating optical crosstalk),

          colour filters,

          optical stack.

Factors that make colour shading more complicated to model and to correct :

          light sources, they come in various flavours,

          manufacturing tolerances resulting in different IR cut off wavelengths.

Correction methods :

          calibration per unit, but is very expensive,

          in the camera by means of white balance adjustment, but this has a limited application in the case of fluorescent or mixed lighting.

Adaptive correction : statistics will be obtained from a single image by looking for changes in hue.  Examples of this adaptive correction were shown to illustrate the improvements of the method.

 

Yang Ni (New Imaging Technologies) : “MAGIC : Logarithmic Image Sensing using photodiode in solar cell mode”. 

A classical method of achieving wide dynamic range is the use of logarithmic pixels based on a transistor operating in weak inversion.  This solution gives quite a bit of FPN, suffers from a low sensitivity and shows a large image lag.  NIT has changed the concept of the logarithmic pixel : the photodiode is used in solar cell mode, and the voltage across the PD is a logarithmic function of the current generated by the light input.  Advantages : physically exact dark reference (no light input means 0 V across the diode), on-chip FPN correction, high sensitivity, no image lag.  The noise in this logarithmic mode pixel is equivalent to the noise of a 3T pixel, being the classical kTC noise (around 500 uV).  This noise level is constant in the logarithmic mode.  It does limit the dark performance of the sensor, but does not limit the dynamic range of the sensor.
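The solar-cell-mode behaviour follows directly from the diode equation.  A minimal sketch of that physics (my own illustration; the saturation current and ideality factor are assumed values, not NIT's) :

```python
import math

def pd_open_circuit_voltage(photocurrent_a, i_sat_a=1e-15,
                            ideality=1.0, temp_k=300.0):
    """Open-circuit voltage of a photodiode operated in solar cell mode.

    With no external bias the diode equation gives
    V = n * kT/q * ln(1 + Iph/I0) : the output is a logarithmic
    function of the photocurrent, and zero light gives exactly 0 V.
    """
    kt_over_q = 1.380649e-23 * temp_k / 1.602176634e-19  # thermal voltage
    return ideality * kt_over_q * math.log1p(photocurrent_a / i_sat_a)

print(pd_open_circuit_voltage(0.0))  # → 0.0 : the exact dark reference
```

Under these assumptions each decade of light adds only about 60 mV at room temperature, which is how such a pixel compresses 120 dB of input into a small voltage swing.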

A 768 x 576 sensor was developed with a 5.6 um pixel design, in a 0.18 um CMOS process; the fill factor is larger than 70 % (no micro-lenses), and the light detection threshold is smaller than 7 mlux @ 25 fps.  Good colour rendition can be maintained over a 120 dB variation in light input; no white balance is needed, nor any tone mapping.  Impressive videos were shown of the performance of the devices in a lab environment.

 

Sandro Tedde (Siemens) : ”Organic and hybrid photodiodes for active matrix imaging”.

Fabrication of the devices is based on a spray-coating technique on top of glass or even on metal foils or polymer foils.  QE of 85 % @ 550 nm is reported.   An attractive feature of this technology is the option to change the organic material and in this way the sensor can be optimized for a particular wavelength region. 

Applications for this technology :

          near-infrared imagers with polymers, by changing the bandgap of the light absorber (changing material) from 1.9 eV to 1.46 eV,

          flat panel X-ray imaging, a prototype of 256 x 256 pixels is shown including videos to demonstrate the capabilities of the image sensor,

          organic position-sensitive devices, based on a device with only 4 pixels each about 1 cm2,

          thin and optic free light barrier based on these organic photodiodes,

          low bandgap material by doping with PbS quantum dots; the diameter of the quantum dots can be used to optimize the sensitivity of the devices for a particular wavelength,

          integration of organic photodiodes on CMOS backplanes, a feasibility study on a 1.2 Mpixel with 4.8 um pixel size was performed.  The first results were shown.

 

Renato Turchetta : “Large area CMOS image sensors for X-ray and charged particle detection”

The presentation started with an overview of some theory about the detection of particles in silicon, and the difference between direct and indirect detection of incoming particles.  Next Renato introduced their INMAPS process, being a CMOS process based on p-substrates with the nMOS transistors in a p-well and the pMOS transistors in an n-well.  Underneath the n-well an extra deep p-well is introduced to prevent the n-well from acting as an electron drain.  An example of a sensor designed in this technology is a ToF mass spectrometer : 72 x 72 pixel array, 70 um x 70 um pixel size, time resolution < 100 ns, equivalent to more than 10 Mfps.

For integrating sensors, 2 examples were shown :

          medical imaging : full field mammo, about A4 size, and chest imaging, about A3 size.  The devices are processed on 200 mm wafers and are made 3-side buttable; the photodiode size determines the noise level of 30 e-; the design is just completed : 145 mm x 120 mm, 50 um pixels, 40 fps analog out, 2x and 4x binning,

          TEM sensor in production : 4k x 4k sensor, 4 sensors/wafer,  direct detection of electrons, showing good MTF and DQE, having a good radiation resistance,  0.35 um CMOS, 61 x 63 mm2, ROI and pixel binning.

 

Albert 24-03-2011.

Image Sensors Europe (3)

March 24th, 2011

 

Howard Rhodes (Omnivision) : “Second Generation 65 nm OmniBSI CMOS Image Sensors”.

(The presentation of Howard was not available at the moment of presentation, so what you find here is what was learned only from his oral presentation.)

The development of BSI at OmniVision started in 2006 on a 2.2 um pixel; in May 2007 BSI was already applied in a 0.11 um technology.  The first public announcements of the OmniBSI technology were made in May 2008 (on a 1.75 um pixel) and first mass production started early 2009.

At this moment OmniVision has about 15 different BSI devices in production, all fabricated on bulk silicon with a p+ back-side passivation.  Why not SOI ?  Because there is no significant improvement in final thickness control, SOI is expensive and the worldwide supply of SOI is limited.  Process flow for the bulk silicon process : p/p+ wafer, frontside processing, wafer bonding, wafer thinning, BSI process, CFA alignment and microlens alignment, bondpad etch. 

A lot of data was shown for 1.4 um and 1.75 um pixels made in the first generation of the BSI technology, this data was already presented earlier at IISW2009 :  SNR10 for 1.4 um : 88 lux, SNR10 for 1.75 um : 53 lux. 

OmniBSI-2 was announced in Feb. 2010 and products are out at this moment; in the meantime OmniBSI-3 is being developed.  OmniBSI-2 is based on a 65 nm Cu CMOS technology node.  Advantages of OmniBSI-2 are : better design rules allowing bigger collection regions in the photodiode (QE, cross-talk, full well, PRNU, higher SNR), new pixel design improvements and new process modules.  For a 1.75 um pixel the SNR10 is improved to 41 lux with the OmniBSI-2 process, for a 1.4 um pixel the SNR10 is improved to 58.5 lux.  The SNR10 for the 1.1 um pixel was reported to be 117 lux.  OmniBSI-2 is running on 300 mm wafers, whereas OmniBSI-1 was using 200 mm wafers.
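For readers unfamiliar with the SNR10 metric : it is the illumination level (in lux) at which the sensor's SNR reaches 10, so lower is better.  A hedged sketch of how such a number comes about; the exposure time, sensitivity and read noise below are assumed values, since the standardized test conditions are not given in the talk.

```python
import math

def snr10_lux(e_per_lux_s, exposure_s=1 / 15, read_noise_e=3.0):
    """Illumination (lux) at which SNR reaches 10.

    Model: signal S = sensitivity * lux * t_exp electrons, noise
    sqrt(S + read_noise^2).  Solving S / sqrt(S + r^2) = 10 for S
    gives S = 50 + sqrt(2500 + 100 * r^2), then convert back to lux.
    """
    s = 50 + math.sqrt(2500 + 100 * read_noise_e ** 2)
    return s / (e_per_lux_s * exposure_s)

# Assumed sensitivity of 100 e-/(lux*s):
print(round(snr10_lux(100.0), 1))
```

Within this toy model a higher sensitivity (bigger or better pixel) directly lowers the SNR10 figure, which is why the OmniBSI-2 numbers above improve generation over generation.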

OmniBSI-3 will focus on 40 nm design rules with all kinds of improvements for the pixels as well as a new colour filter technology.

 

York  Haemisch (Philips Digital Photon Counting) : “Fully digital light sensors enabling true photon counting with temporal and spatial resolution”. 

Old fashioned PMTs are being replaced by silicon PMTs, based on avalanche PDs biased in Geiger mode.  Status of the SiPMT work at Philips : 8 x 8 digital SiPMs with 6400 diodes (cells) per pixel on 11 cm2 area are available, and the components can be tiled in 4 directions.  In a new generation, an FPGA will be added to the system that allows corrections and signal processing as close as possible to the sensor die.  Also this presentation showed a lot of performance data and details about the measurements made.

So far an outstanding timing resolution has been measured (ideal for ToF), a lower dark count level than in analog systems is reported and the systems are very robust (the sensitivity hardly changes with temperature, insensitivity to electromagnetic interference).  The first application field is nuclear imaging for medical applications, next other medical imaging can/will be targeted and ultimately analytical instrumentation as well.

 

Albert 24-03-2011.

Image Sensors Europe (2)

March 23rd, 2011

 

Avi Strum on : “High end image sensors challenges”.  Avi mentioned the demands for high-end application imagers, being :

          Sensitivity, over 15 stops for digital cinema,

          Dynamic range, up to 120 dB for automotive and digital cinema,

          Frame rate driven by the need of multiple exposures,

          High resolution driven by smaller pixels,

          Functionality to allow in-pixel computational analysis,

          Angular response driven by the application of DSLRs.

The answer to all these challenges is BSI.  Avi spent most of his time explaining the BSI process developed by TowerJAZZ (in combination with its partner SOITEC).  The main challenges seen are :

          the back-thinning of the silicon (using SOI and stopping the etch on the BOX),

          the alignment on the backside for post processing (deep trenched alignment markers),

          suppression of the dark current (perfect interface with smart cut of SOITEC and buried AR layers).

According to the speaker, this process will be more expensive than others, but it will have outstanding image performance compared to bulk BSI CMOS.  It is expected that the process will become available to customers in the beginning of 2012.

During Q&A it was learned that the process will run on 200 mm wafers and will be compatible with stitching.

 

Next was Shorab Yaghmai (Aptina) : “Technology trends and market opportunities for image sensors for automotive applications”.  One of the key requirements for automotive is the dynamic range.  In one of its first automotive products Aptina tried to obtain a larger dynamic range by means of multiple exposures.  Advantages : no added pixel circuitry, CDS readout and no reduction in full well.  Disadvantages : on-chip line buffers needed and a 3x faster readout needed.  Another method for WDR is based on lateral overflow with a soft TX pulse : the latter modulates the full well capacity of the pinned photodiode during the exposure period.  Both techniques have their own pros and cons.  Aptina developed dedicated postprocessing of the images to cope with additional blue blur (due to blooming in the different colours and different exposure cycles) and motion blur.
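The multiple-exposure principle is simple to sketch.  This is my own illustration with made-up numbers (exposure ratio, full well), not Aptina's algorithm : use the long exposure while it is unsaturated, and once it clips, fall back to the short exposure scaled by the exposure ratio.

```python
def merge_dual_exposure(long_px, short_px, ratio=16, full_well=4095):
    """Combine a long and a short exposure of the same pixel.

    The long exposure has the best SNR, so use it as long as it is
    below full well; once it saturates, the rescaled short exposure
    takes over, extending the dynamic range by the exposure ratio.
    """
    if long_px < full_well:
        return long_px
    return short_px * ratio

# Dark pixel: long exposure used directly.
print(merge_dual_exposure(100, 6))      # → 100
# Saturated pixel: short exposure, rescaled.
print(merge_dual_exposure(4095, 1000))  # → 16000
```

The sketch also hints at the listed disadvantages : the two exposures happen at different times (hence motion and colour artefacts) and both rows must be buffered on chip before they can be merged.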

Another challenge for automotive is the low light, low noise imaging needs.  Improvement techniques in this field are dual conversion gain pixels, pixel-wise gain selection, low noise analogue design (who is not willing to have this ?), digital CDS, digital FPN measurement and correction schemes, randomizing readout channels (thank you Martijn Snoeij !?), isolation of analogue and digital circuitry.

All these techniques are applied in new Aptina devices : 1Mpixel with 120 dB dynamic range with compressed and uncompressed data out.

 

“High-Speed Imaging”, by Thomas Baechler (CSEM).

The talk started with a nice historic overview of high-speed cameras, from Albert Londe’s camera in 1893 with 12 images in 0.1 s, to a frame rate of 16 Mimages/s by G. Etoh.  The biggest issue these days in continuous shooting at high frame rates is the data rate needed to read all pixels at ultra-high speeds.  For this reason data reduction is mandatory.  A few possible data reduction approaches were discussed : contrast images, delta images, adaptive ROI, optical correlation and spectral methods.  An example of optical coherence tomography with intelligent data reduction was shown, for a camera which normally needed a frame rate of 400 kHz.  The frame rate is reduced with pixel-level demodulation in a kind of smart pixel.  The pixels are 40 um in size, and contain 1 PD and 30 transistors and capacitors per pixel.  This design reduces the data rate by a factor of 100 (or even more) compared to the original requirement.
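Of the data reduction approaches listed, delta images are the easiest to illustrate.  A toy sketch of my own (a 1-D "frame" and an arbitrary threshold, not any speaker's implementation) :

```python
def delta_frame(prev, curr, threshold=4):
    """Encode a frame as (index, value) pairs for changed pixels only.

    In high-speed capture most pixels barely change between
    consecutive frames, so transmitting only significant deltas
    cuts the data rate dramatically.
    """
    return [(i, c) for i, (p, c) in enumerate(zip(prev, curr))
            if abs(c - p) > threshold]

prev = [10, 10, 10, 200, 10]
curr = [11, 10, 10,  50, 10]    # only one pixel really moved
print(delta_frame(prev, curr))  # → [(3, 50)]
```

Here 5 pixels shrink to 1 transmitted sample; on a mostly static scene at megahertz frame rates the same idea yields the large reduction factors the talk mentioned.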

 

Hendrik Malm (Nocturnal Vision) : “Novel technology for nocturnal vision applications”.  This paper primarily deals with image processing of data generated by an existing video camera.  The data is processed with combined contrast enhancement, image stabilization and noise reduction in a novel way : spatiotemporal intensity averaging with sharpening.  The talk was supported by video shots to illustrate each step of the processing.  In principle the idea comes down to using multiple images (minimum 7 frames in the time domain, 9 x 9 kernels in the spatial domain) from a video stream to improve every individual image.  The whole concept is inspired by animal vision.  The algorithm can be implemented in a real-time processor.
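A toy version of the spatiotemporal averaging step, on a 1-D row of pixels.  This is my own sketch, not the presented algorithm : the real method uses at least 7 frames and 9 x 9 spatial kernels, and adds motion-adaptive sharpening that is omitted here.

```python
def spatiotemporal_average(frames, t_radius=3, s_radius=1):
    """Denoise the middle frame of a short clip of 1-D pixel rows.

    Average each pixel over neighbouring frames (temporal) and
    neighbouring pixels (spatial); noise drops roughly with the
    square root of the number of samples averaged, at the cost of
    blur that a sharpening step must restore.
    """
    t_mid = len(frames) // 2
    width = len(frames[0])
    out = []
    for x in range(width):
        samples = [frames[t][xx]
                   for t in range(max(0, t_mid - t_radius),
                                  min(len(frames), t_mid + t_radius + 1))
                   for xx in range(max(0, x - s_radius),
                                   min(width, x + s_radius + 1))]
        out.append(sum(samples) / len(samples))
    return out

# 7 noisy 1-pixel-row frames of a constant scene:
frames = [[8, 12, 10], [10, 10, 10], [12, 8, 10],
          [10, 10, 10], [8, 12, 10], [12, 8, 10], [10, 10, 10]]
print(spatiotemporal_average(frames))  # → [10.0, 10.0, 10.0]
```

With 7 frames and a 3-pixel window each output averages up to 21 samples, which already suppresses the noise by a factor of about 4.5 on a static scene.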

 

Peter Centen (Grass Valley) : “Advances in image sensor technology for broadcast cameras”. 

Broadcast has to deal with the following specs : illumination range 0.1 lux … 2000 lux … 100,000 lux, colour temperature 1800 K … 3200 K … 20,000 K, gain -6 dB … 18 dB … 40 dB, video up to 60 fr/s, modulated light sources (fluorescent).  On the other hand, broadcast is full of existing standards that need to be obeyed, because cameras have to be synchronized with each other, but also with the studio and the broadcasting system.  Next, it was clearly described how to perform the SNR measurement.

Advances in CCD technology reported for broadcast applications : image diagonal remained fixed, light conditions remained fixed, video bandwidth and pixel size increased, overall 15 dB in noise and sensitivity is gained over the last 20 years.  For instance : read noise from 30 e- (5 MHz) to 8 e- (30 MHz).

Broadcast is a low volume, high performance market : it is difficult to find a suitable CMOS imager, and as a result Thomson started its own development.  The future of broadcast sensors : imagers with additional features, a single imager for 3D, HDR live video, high speed.

During Q&A Peter compared CCDs with CMOS for his application : pros of the CCD are the absence of any row noise, absence of any column noise; pros of the CMOS are the low temporal noise due to the parallel processing on column level, single supply voltage and the plug-and-play way of operation. 

Albert 23-03-2011.