Continuation of the blog

January 14th, 2011

 

After finishing the long series of PTC stories, I asked for new ideas and suggestions for this blog.  I got several reactions through this blog, through www.image-sensors-world.blogspot.com, as well as through private mail.  The result is that, for the time being, I will continue with 6 or 7 more posts around the PTC.  As you may be aware, I developed a software tool to simulate images and to study the influence of different noise sources and specification parameters on image quality.  I will use this software tool to illustrate the effect of the following parameters on the PTC analysis :

       Temperature at which the camera is running,

       Number of photons,

       Conversion gain,

       QE specification,

       Number of ADC bits,

       Non-linearity of the readout node and the source-follower.

Because I do not want to repeat all the settings of the sensor and the camera over and over, I will list here the “standard” settings that will be used in all further simulations (unless otherwise specified).  Here they are :

       Sensor size : 640 x 480,

       Number of images generated : 25,

       Conversion gain : 40 µV/e,

       Analog gain : 1,

       Temperature : 30 °C,

       Exposure time changing from 0 s to 6.5 s,

       Number of ADC bits : 12,

       Dark current : 200 e/s at 22 °C,

       Dark random non-uniformity : 30 e/s at 22 °C,

       Dark current doubling temperature : 8 °C,

       Saturation level : 17,500 e,

       Number of photons : 150,000/s,

       QE : 33 %,

       Saturation level non-uniformity : 5 %,

       DC offset of the output : 125 mV,

       Output amplifier noise : 0.3 mV,

       Temporal row noise : 6 e,

       Fixed pattern row noise : 3 e,

       Repetition frequency row FPN : 16 lines,

       Temporal column noise : 8 e,

       Fixed pattern column noise : 12 e,

       Temporal pixel noise : 10 e,

       Fixed pattern pixel noise (offset) : 3 e,

       Defects : 100 pixels stuck at “1”, 100 pixels stuck at “0”,

       RTS pixels : 900 pixels, RTS chance : 20 %,

       Non-linearity SF : switched off,

       Non-linearity FD : switched off.

When studying the influence of the aforementioned parameters, the first measure taken is the correction of the defective pixels.  With this knowledge in hand, the first exercise can start : what is the influence of temperature on the PTC, or how can the temperature be used to generate a Photon Transfer Curve ?  The answer will follow shortly.
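As a small teaser, the dark-current settings listed above already contain the key to that exercise : with a doubling temperature of 8 °C, the dark current at any temperature follows from the 22 °C reference value.  A minimal sketch of that relation (the function name is mine, not part of the simulation tool) :

```python
def dark_current(temp_c, ref_current=200.0, ref_temp_c=22.0, doubling_temp_c=8.0):
    """Dark current in e-/s, doubling every `doubling_temp_c` degrees Celsius.

    The defaults match the "standard" settings listed above :
    200 e-/s at 22 degrees C, with a doubling temperature of 8 degrees C.
    """
    return ref_current * 2.0 ** ((temp_c - ref_temp_c) / doubling_temp_c)

# At the standard simulation temperature of 30 degrees C (exactly one doubling step) :
print(dark_current(30.0))   # 400.0
```

So simply running the camera 8 °C warmer doubles the dark signal, which is exactly the lever the upcoming temperature exercise will use.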

 

Albert, 14-01-2011.

Merry Christmas and a Happy New Year

December 20th, 2010

 

At the end of 2010 I would like to take the opportunity to wish all my readers a Merry Christmas and a Happy New Year. 

Looking back at 2010, it was clear that the economy was recovering.  I think that most of us had a great year in imaging.  I hear many people telling me how busy they were in 2010 and how good their business was over the last 12 months.

If I look at my own activities, 2010 started with a successful International Solid-State Circuits Conference or ISSCC.  After a dramatic 2009 edition (with a strong reduction in participant numbers due to the economic situation), the 2010 edition partly recovered.  The number of attendees was not up to the record level of a few years ago, but the negative trend turned into a positive one !  As Chair of the International Technical Program Committee (ITPC), my responsibility was to put together the technical program.  Of course that is not something I did on my own : I got a lot of help from the subcommittee chairs and all technical committee members.  It was quite a bit of work, but it was also a great experience.  For me, the highlight of the conference was the plenary session.  First of all, this was the opening session of the ISSCC and I had the honor of chairing it.  Talking in front of 3000 people gives goose bumps, I can assure you.  But once the plenary session was finished, the conference ran (so to say) by itself.  It is amazing how such a huge organization with 5 parallel sessions proceeded autonomously.  A lot of very important work is done behind the curtains ; without these efforts the conference could never be that successful.  Thanks to all the people involved in ISSCC 2010 : it was a great experience working with you and I learned a lot from it !

I did not attend any other conferences in 2010, but I did visit Photokina in Cologne and the Vision show in Stuttgart (both not so far from my home).  It was amazing to see how Photokina evolved over the last 2 editions.  In 2008 Agfa, Fuji and Kodak all three still had their own big hall where they exhibited their film business.  In the 2010 edition, all three exhibition halls were completely empty.  The film industry is completely gone.  On the other hand, a lot of electronics companies were present that were not there in earlier editions.  It was clearly visible : the complete photography business has gone digital. 

My last visit to the Vision show was at least 5 years ago, and actually I have to admit that the 2010 edition was a positive surprise to me.  The Vision really looks like an image-capturing exhibition, and that is my own playground.  It was quite funny to meet a lot of people at the booths of the various companies, and while walking through the aisles many familiar faces showed up as well.  It is also surprising to me that many people recognized me because they had followed one or more of my courses, but unfortunately in many cases I did not recognize them.  I would like to apologize for that, but over the last years I have had too many participants in my classes to recognize all of them or to remember all those faces.

As far as the Harvest Imaging teaching activities were concerned, the recovery already started in the second half of 2009 and continued in the first half of 2010.  Several in-house as well as public courses could be organized.  The highlight of my teaching activities was the very first edition of my new course based on the hands-on evaluation of imagers and cameras.  It will not be a surprise to you if I tell you that I like teaching very much, but the preparation of a new course is also very motivating.  “Teaching is learning twice”, and that is fully correct.  The preparation of new course material forces me to think everything over and over again before I can put the material onto the sheets and present it in the courses.  The teaching and training will continue in 2011, and I plan to run the hands-on evaluation course at least two times in the coming year.  I am curious to see whether the participants will remain as enthusiastic about the course.  

I wish all of you the very best for 2011, and hope that we will regularly “meet” through this blog.  Thanks for visiting the website of Harvest Imaging, see you next year 😉

Albert, 20-12-2010.


What is next after the PTC discussion ?

December 12th, 2010

Over the last months I posted a lot of material w.r.t. the photon transfer curve and the impact of various noise sources on the PTC.  After several posts “in dark”, the situation “with light” was studied as well.  But in principle, the series about the PTC is now finished.  Then the question can be asked : “And what’s next ?”   In the early days of the blog, I got several comments on the material posted, but over the last couple of months I hardly got any reactions.  So I do not know what “my customers” would like to see discussed on the blog.  For that reason I am writing this post.  I am searching for subjects to talk about, write about and publish on this blog.  I would like to do another series of publications on a particular subject, but what kind of subject ?

A while ago, I got an interesting suggestion : post and discuss design errors made in imaging.  I would not pretend that I never made any design error, but for this subject too I would like to get input from others.  Is anyone willing to share his/her mistakes so that others can learn from them ?  If yes, they are most welcome.  If you want, I can discuss and post them without any reference, so the world-wide imaging community will never know where the errors came from. 

I am curious to see whether this request for suggestions, subjects, topics can start a new series of posts on my blog.  Looking forward to it !

Albert, 12-12-2010.

First Course “Hands-On Evaluation of Image Sensors”

December 5th, 2010

Two weeks ago I taught the new course “Hands-On Evaluation of Image Sensors” for the very first time.  The course location was Barcelona, and the organization was done by CEI-Europe.  Teaching a new course for the very first time is always a bit of an adventure, because you do not know what the participants expect from it.  But this time it was even trickier : a completely new course, and for the very first time with measurement equipment in the classroom.  I organized 10 laptops, 10 cameras, 10 light sources, 10 power supplies, test charts, light boxes, etc., all needed to perform hands-on evaluation of image sensors and cameras.   Fortunately I had a day off the day before the course started, so I had plenty of time to install the equipment in the classroom and check out the hardware and software.  To be prepared for any hardware disaster while running the course, my daughter Kim was on standby during the two days ; she has more experience with hardware and a soldering iron than I have myself.  But besides some minor issues, the hardware worked smoothly.  The soldering iron could remain in the travel case.

Day 1 : after a short introduction, I went through a first exercise together with all the participants.  The assignment was to prove that the noise in images decreases after averaging several images, and to show experimentally that the noise reduction is inversely proportional to the square root of the number of images.  In itself a simple exercise, meant to get acquainted with the equipment, to get familiar with the measurement software and to get a first idea of the difference between temporal noise (uncorrelated between images) and fixed-pattern noise (correlated between images).
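The 1/sqrt(N) behaviour of this first exercise is easy to reproduce with synthetic frames.  A minimal sketch (NumPy, purely illustrative and not the course software) :

```python
import numpy as np

rng = np.random.default_rng(seed=0)
SIGNAL = 100.0    # flat "scene" level in DN
SIGMA = 10.0      # temporal noise per frame in DN

def residual_noise(n_frames, shape=(480, 640)):
    """Standard deviation left after averaging n_frames noisy frames."""
    frames = SIGNAL + SIGMA * rng.standard_normal((n_frames,) + shape)
    return frames.mean(axis=0).std()

for n in (1, 4, 16, 64):
    print(n, residual_noise(n), SIGMA / np.sqrt(n))   # measured vs. theoretical 10/sqrt(n)
```

Note that averaging only suppresses the temporal (uncorrelated) noise ; a fixed-pattern component, being identical in every frame, would survive the averaging untouched, which is exactly the distinction the exercise was meant to show.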

After this first getting-acquainted exercise I put all the participants to work in groups of two.  Their first assignment was to measure all fixed-pattern noise components of an unknown imager in an unknown camera by grabbing and analyzing images in dark.  Once the images were stored on the hard drive, a short piece of code needed to be developed to calculate all parameters from the images obtained.  Some participants struggled a bit with writing the software, but in the end I was surprised how quickly everyone was able to grab the images.  Once all groups had obtained their results, the theory behind the exercise was explained and the measured/calculated parameters were discussed and compared.

The next measurement assignment focused on obtaining the temporal noise parameters of the image sensor and the camera, based on the same set of images obtained earlier.  The same way of working was followed : first all groups worked separately, and afterwards the results were discussed in a plenary session.

Day 2 : while the first day focused on measurements in dark, during the second day the lights were turned on.  Again, in two sessions the fixed-pattern noise components were measured/calculated and later all temporal components were evaluated.  All these measurements were done quite quickly, because the code developed on the first day could be reused.  Measurements with light on the sensors at different exposure times gave rise to the famous “Photon Transfer Curve”.  All participants could experience how you can construct the PTC based on data obtained from multiple images.  But during the theoretical session it was also explained how you can generate a PTC based on just three images, on only two images, and even on a single image.  At the end of day 2 attention was paid to how to measure the MTF of a camera (based on a single image !) as well as to how to measure the spectral response of a camera (based on a single image !)
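The two-image variant mentioned above is perhaps worth a small sketch.  Subtracting two flat-field frames cancels the fixed-pattern noise, and in the shot-noise-limited region the conversion gain (in DN per electron) simply follows from the variance-over-mean ratio.  A hedged illustration on synthetic shot-noise data (not the course software ; the names and numbers are my own) :

```python
import numpy as np

rng = np.random.default_rng(seed=1)
K_TRUE = 0.25          # conversion gain in DN per electron
ELECTRONS = 5000.0     # mean signal in electrons

# Two flat-field frames dominated by photon shot noise :
f1 = K_TRUE * rng.poisson(ELECTRONS, (480, 640))
f2 = K_TRUE * rng.poisson(ELECTRONS, (480, 640))

mean_signal = 0.5 * (f1.mean() + f2.mean())   # DN
shot_var = (f1 - f2).var() / 2.0              # FPN cancels ; /2 because the frames are independent
K_est = shot_var / mean_signal                # var = K * mean for pure shot noise
print(K_est)                                  # close to 0.25
```

One such mean/variance point per exposure time, plotted on log-log axes, is exactly what builds up the Photon Transfer Curve.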

The overall feedback of the participants after the course was quite positive.  Several reactions sounded like : “I have learned a lot !”.  It is always good to hear that my “customers” are satisfied, but nevertheless, after running this course for the first time, I also learned a lot and will start to fine-tune the course for the next time.  What is going to change ?  Here is a list of action items I defined for myself :

          Some extra software functions/tools will be developed that can be used during the measurements/calculations, so that the participants can focus more on the interpretation of the data,

          Extra images will be grabbed and stored on the laptops, so that once the participants have their measurement code ready, it can be used for more situations than the ones possible/available in the classroom.  For instance, images generated by other types of cameras, or images generated at lower or higher temperatures,

          Updating and optimizing the course material ; this is always necessary after the first edition of a course.  Based on the questions and remarks from the participants during the course, I learned about the quality of my own sheets. 

Although I already did so during the evaluation of the course, I would like to thank the participants once again for their cooperation during the course, for their feedback afterwards, and especially for attending this very first edition of the “Hands-On Evaluation” course.  As far as I know, this is a unique project in the world of imaging.  No one else organizes classes with hands-on measurements and evaluation of commercially available cameras.  It was also a very unique experience preparing this training and running it for the first time.  Updating the course for the second edition will take quite a bit of work, but nevertheless I am looking forward to doing it.  The next “Hands-On Evaluation” course is scheduled for May 2011 in Copenhagen.

Albert, 05-12-2010.

A new noise source ? Yes, LE NOISE !

October 2nd, 2010

This week Neil Young released his new CD, with the title “Le Noise”.  As can be expected from Neil, this album is something completely unexpected.  The man plays solo, nothing special about that, but several songs are played with only an electric guitar.  He did this once before with “Mother Earth” several years ago, but then it was just one song to close a CD.  “Le Noise” contains very astonishing music and a very great sound ; keep in mind that for the first time Neil worked together with Daniel Lanois.  The latter is/was the producer of many other great artists, such as Bob Dylan, U2, Peter Gabriel, Brian Eno and Talking Heads.

On the CD a couple of typical Neil songs can be found : “Angry World” is about the world of banks and money, while “Love and War” is another song against violence and the cruel things that happen around the world.  The rumor goes that all songs were recorded during nights of full moon in the house of Daniel Lanois …. 

Over the last couple of months Neil Young lost several of his old friends.  L.A. Johnson passed away, the one who produced Neil’s movie “Journey through the past” in the early ’70s.  Ben Keith, Neil’s steel guitarist who had been working with Neil since “Harvest” in 1972, died unexpectedly.  Neil very often called Ben “My Brother”.  Also Neil’s guitar technician, Larry Cragg, is no longer joining Neil on his tours.  I do not know what happened to Larry, who worked with and for Neil Young for over 40 years.  It is surprising that Neil did not include some tribute to his great friends on the new album.  He did recently write a very nice song for L.A. Johnson, called “You Never Call”, but it is not included on “Le Noise”. 

I have no idea how many unreleased songs Neil must have in his library at his ranch, but for sure a lot.  And he has the good habit of regularly grabbing some of the older, unreleased material and putting it on one of his new albums.  This time “Hitchhiker” found its way onto “Le Noise”.  The first attempts to write “Hitchhiker” date back to the ’70s, and since the early ’90s Neil has included it in a couple of live shows.  Great song, great voice, great guitar play.  For those interested in getting some goose bumps, check out :

http://pitchfork.com/tv/#/musicvideo/8384-neil-young-hitchhiker-reprise

Thanks Neil, thanks Old Black (although he plays most of the songs on a White Gretsch 😉) !

Albert 2010-10-02

CMOS Imager Workshop, Duisburg, May 4-5, 2010 (2/2)

May 12th, 2010

5th CMOS Imager Workshop, Duisburg, May 4-5, 2010.

DAY 2

Boyd FOWLER (Fairchild Imaging, Milpitas, CA) : “Scientific CMOS Image Sensor”

The specific sensor architecture and sensor operation were explained, but that part of the talk was very similar to the one presented last year in Toulouse.  The most interesting details came when Boyd discussed the performance of the device.  He compared the high gain channel with the low gain channel, both in rolling shutter mode as well as in global shutter mode.  Worthwhile to mention is the mean noise performance of the various modes :

          Read noise of 1.5 e- high gain channel in rolling shutter mode (100 MHz),

          Read noise of 9.2 e- low gain channel in rolling shutter mode (100 MHz),

          Read noise of 1.9 e- high gain channel in rolling shutter mode (290 MHz),

          Read noise of 10.3 e- low gain channel in rolling shutter mode (290 MHz),

          Read noise of 4.8 e- high gain channel in global shutter mode (100 MHz),

          Read noise of 13.3 e- low gain channel in global shutter mode (100 MHz),

          Read noise of 5.7 e- high gain channel in global shutter mode (290 MHz),

          Read noise of 14.7 e- low gain channel in global shutter mode (290 MHz).

The pixel is a 5T cell that can be operated with CDS in the rolling shutter mode, but without CDS in the global shutter mode (digital CDS is performed off-chip).  That explains why the rolling shutter mode is so much superior to the global shutter mode.  Together with all these numbers, several histograms of the noise distribution and the dark current distribution were shown.  At the end of the talk, images were shown of the colour version of this device, as well as of the back-side illuminated version.  QE levels over 90 % were reported (if the appropriate anti-reflective coating was applied). 

 

Alex KRYMSKI (Alexima, CA) : “Design of CMOS imagers : Selected Circuits & Architectures”

The focus of the presentation was a series of useful circuits and architectures that were created over the last decade.  If you tell people that you will talk about useful circuits, you inherently admit that useless circuits exist as well, and indeed Alex even showed examples of useless circuits.

Examples of useful circuits were : clamping the source-follower input and output, the dynamic source follower, driving rows from both sides, multiple ADCs per column of the imager, multiple busses in the pixel array, and pipelined top-bottom and block-memory readout.  Together with circuit diagrams, Alex explained the working principles of the circuits and illustrated the concepts with products in which these circuits are applied.  The device for which he and his co-workers received the 2003 Walter Kosonocky Award was also highlighted.

 

Guy MEYNANTS (CMOSIS, Antwerp, Belgium) : “CMOS image sensors for industrial applications”

The outline of Guy’s talk :

          High speed imaging and other requirements for industrial imaging,

          CIS architectures with analog architectures for the fastest, customized imagers (conclusion of this chapter : analog offered the highest speed so far, mainly obtained through parallelism, but the capabilities for further speed-up are limited),

          CIS architectures with on-chip ADC for easy-to-use and easy-to-integrate imagers (a great overview of the various ADC architectures was given),

          Case study : a 2 Mpixel imager at 340 fps with global shutter and CDS.  A new shutter type implemented with an 8T pixel was shown, with a parasitic light sensitivity of 1/60,000.  Taking into account a pixel size of 5.5 um, this is an amazing performance. 

 

Walter RUETTEN (Philips Research, Aachen, Germany) : “Solid-State X-ray Imaging”

The talk started with an overview of the various X-ray detection systems : screen-film systems, image intensifiers, fully digital solid-state detector systems, photoconductors and scintillators.  A parameter that is not often used in (consumer) digital imaging is the Detective Quantum Efficiency, but in the medical imaging field the DQE is a very important characteristic.  Walter explained the definition as well as the importance of the DQE.  A nice example based on numbers and figures (that is what engineers like to see) explained the typical noise issues in an X-ray system.

The second part of the talk concentrated on monolithic silicon detectors : large-area CMOS image sensors, buttable on three sides, which allow very large detectors (40 cm x 40 cm) to be built.  A first test chip of such a detector is available with a high-speed readout architecture implemented.  With this test chip most features of the large-area detector can be tested.  The very first X-ray experiments show at least the same or even a better DQE, with higher resolution, compared to the commercially available detectors.

What is the future going to bring in medical imaging ?   A potential next step could be spectral imaging, in which the energy of the X-rays can be detected (“colour X-ray”).  Apparently an interesting “X-ray future” lies ahead of us.

 

Werner BROCKHERDE (Fraunhofer IMS, Duisburg, Germany) : “Solid-State ToF Sensors”

3D imaging is a hot topic these days.  A typical sensor architecture used to detect the third dimension is ToF : time-of-flight.  ToF can be realized by three approaches : direct time measurement, continuous-wave modulation, and pulse-modulated light.  All three principles were explained, compared and benchmarked.  At IMS, the pulse-modulation technology is explored.  A major issue in 3D imaging is the speed of the pixels : measurements need to be done at the “speed of light” and pixels need to be emptied within the same timescale.  To get to this point, photogate pixels and pinned-photodiode pixels with a built-in lateral drift field were developed.  Examples and results of both architectures were shown in the talk.

 

Ulrich SEGER (Robert Bosch, Germany) : “Imaging sensors for driver assistance applications”

It should be clear that vision systems and intelligent cameras can add a lot of functionality to driver assistance applications.  Near-IR vision can help a lot in detecting obstacles, etc.  Several of these examples are known.  In this talk, Ulrich highlighted a couple of issues that were encountered with the CMOS image sensors used.  A first example was sun burn-in : too much sunlight focused on the image sensor could generate some nasty burn-in effects, showing up as FPN.  This effect was corrected by changing/adapting the processing of the micro-lenses on top of the image sensor.  A second issue was the drift of the FPN after the camera was assembled.  This effect had to do with UV damage introduced during the assembly process.  

These two examples show that the harsh automotive environment can put extra constraints and requirements on the imagers/cameras when used for automotive purposes. 

 

Martin WENDLER (Pilz, Ostfildern, Germany) : “Safe CMOS camera system for three-dimensional zone monitoring”

The very last talk of the workshop focused on the application of a logarithmic CMOS image sensor with a global shutter to protect a 3D zone in an industrial environment.  The safety requirements of such an application are extremely high, and for that reason a camera is needed with an imager that complies with :

          High dynamic range (120 dB),

          Triggerable global shutter,

          Logarithmic characteristic curve.

It was an interesting talk : hearing an image sensor and its requirements discussed by the customer.


CMOS Imager Workshop, Duisburg, May 4-5, 2010 (1/2)

May 9th, 2010

5th CMOS Imager Workshop, Duisburg, May 4-5, 2010.

DAY 1

 

Holger VOGT (Fraunhofer IMS, Duisburg, Germany) : “Devices and technologies for CMOS Imaging”

The first talk on the first day gave a good introduction to the workshop.  In the first part of the talk several CMOS detectors were reviewed (photodiode, buried photodiode, pinned photodiode and photogate pixels).  Special attention was given to the effect of emptying the pixels at higher speed and to how to introduce a lateral drift field in the pixels.  At the end of the talk several projects and topics were illustrated that form part of the IMS research portfolio.  Examples are  :

          Colour by metal grids,

          Colour by depth sensing in the silicon,

          The low noise double modified internal gate pixel,

          SPADs,

          BACKSPAD (back-side illuminated SPAD) and,

          Uncooled Bolometers. 

A nice opening of the workshop because of its wide overview, with a bit of publicity for the Fraunhofer IMS institute.  But they deserve it, because they are the organizers of the workshop.

 

Lindsay GRANT (ST Microelectronics, Edinburgh, UK) : “CMOS image sensors and technology”

In the meantime I have heard several presentations by Lindsay, and they all come down to a very broad overview of the CMOS technologies needed for mobile imaging.  If you have heard a few of them, you get a very good insight into how rapidly this technology is evolving.  The topics addressed in this talk are too many to list here, but (for me) the main ones are :

          0.9 um pixel size on the roadmap, 1.1 um in demo,

          Progress in pixel modeling (optical and device physics),

          Pixel optics,

          Colour improvements,

          Back-side illumination and crosstalk,

          SNR performance metric.

In his last sheet he tried to show us : “What’s next ?”  In short :

          The pixel race continues,

          Front-side illumination will remain cost/performance competitive,

          Sensor image quality assessment will continue to be a topic of active research.

At the end of the talk Lindsay acknowledged the late Peter Denyer for his inspiring leadership.

 

Mark ROBBINS (e2v, Chelmsford, UK): “Electron Multiplying CCDs”

The EM-CCD is intended for imaging in a photon-starved environment where all sources of noise must be minimized.  The EM-CCD reduces the effect of charge-to-voltage conversion noise and noise from the video chain.  After a short description of how the EM-CCD works, Mark spent quite a bit of time on the introduction of the noise factor and on the dependency of the gain on temperature and gate voltage.  He showed nice results for the EM-CCD in photon-counting mode.  In the last part of the talk, the Rose criterion was introduced to quantify the visibility of a feature in a noisy image.  The theory was illustrated with images taken under extremely low light levels.  As can be expected, the ultimate low-light-level sensor will be the EM-CCD in combination with back-side illumination.   It is interesting to note that up to this point in the workshop, all speakers had referred to BSI.
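The noise factor Mark introduced lends itself to a small numerical illustration.  At high multiplication gain the excess noise factor of the multiplication register approaches sqrt(2), while the effective read noise is divided by the gain.  The sketch below uses this textbook EM-CCD behaviour with numbers of my own choosing, not figures from the talk :

```python
import math

def emccd_snr(signal_e, read_noise_e, gain, excess_factor=math.sqrt(2.0)):
    """SNR of a single pixel : shot noise inflated by the excess noise
    factor F (about sqrt(2) at high gain), read noise divided by the gain."""
    noise = math.sqrt(excess_factor ** 2 * signal_e + (read_noise_e / gain) ** 2)
    return signal_e / noise

# 5 photo-electrons with 30 e- of read noise :
print(emccd_snr(5, 30, gain=1, excess_factor=1.0))   # conventional CCD : read-noise limited
print(emccd_snr(5, 30, gain=1000))                   # EM-CCD : near the shot-noise limit / sqrt(2)
```

The multiplication thus buys back the read noise at the price of sqrt(2) in shot-noise-limited SNR, which is why the EM-CCD shines exactly in the photon-starved regime described above.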

 

Frank ZAPPA (Politecnico di Milano, Milano, Italy) : “SPADs”

In the overview presentation about SPADs, Frank addressed the following topics :

          Single-photon counting and timing detectors, such as PMTs, special CCDs (EM-CCD, I-CCD), SSPDs and SPADs,

          Single photon avalanche diode,

          Circuit modeling, static as well as dynamic,

          Device structures, with focus on planar versus reach-through,

          Processing technologies with focus on custom versus CMOS,

          Circuits : monolithic versus smartchips, detection as well as counting chips,

          Arrays for single-photon imaging.

As a conclusion, Frank stated that SPAD detectors and arrays, microelectronics and instrumentation are available, know-how is present for custom development, and commercial products based on SPADs are available on the market.

 

Gerhard LUTZ (PNSensor, Munich, Germany) : “Silicon Radiation Detectors”

Sometimes one forgets that there is much more than CCD or CMOS image sensors to detect radiation, but Gerhard put us back with both feet on the ground.  He discussed the basic detection process of radiation in semiconductors, and reviewed the basic principles of semiconductor detectors such as the reverse-biased diode, the semiconductor drift chamber and the DEPFET detector-amplifier structure.  It was quite funny to see the presentation of the good old junction CCDs ; I never thought that products were still being made with this technology.  But the more you think about it, the more intriguing the devices are.   The same is true for the DEPFETs.  These unique devices are able to satisfy a variety of different requirements depending on the specific application.  More sophisticated variations of this structure have been invented, and their functioning has been proven by simulations and by measurements on finished devices.   

 

Albert THEUWISSEN (Harvest Imaging, Bree, Belgium) : “Noise : you love it or you hate it”

A simulation and evaluation tool was described.  The simulation tool accepts the specification of an image sensor as input and creates images.  One of the main applications of this simulation software is the study of the various noise sources present in an imager/camera.  The artificial images created can be the input for the evaluation tool, but images generated by a real camera can be used as input as well.  During the presentation an example was shown of the combined simulation-evaluation of images.  Real images generated by a CMOS camera were analyzed as well.  During the talk the main focus was on images created in dark.  Even without any light input, several important noise contributions can be measured/analyzed.  The algorithms applied in the evaluation tool will be part of the new training course that will be offered by Harvest Imaging later this year.  

 

Pierre MAGNAN (ISAE, Toulouse, France) : “Ionization effects in CMOS imagers”

In the first part of the presentation the theory of the different defects and artifacts that can be generated by radiation was discussed.  It was clearly shown how complex the physics behind radiation effects in CMOS image sensors is.   Attention was given to :

          The generation of electron-hole pairs in the various materials involved,

          Charge transport in the silicon dioxide,

          Charge trapping in silicon dioxide,

          Radiation-induced interface traps.

Then the question was answered : what is going to be the influence of all these beautiful artifacts on the CIS performance parameters ?  As can be expected, in the first place the dark current will increase, but the light response will also change ; unfortunately, it will become lower.    

Pierre ended his talk with some ideas about how to make a design radiation hard.


ISSCC Report (6)

February 17th, 2010

Continuation of previous blog.

 

       Makoto Ikeda (University of Tokyo) : “3D range image capture technologies”.  A detailed  overview of 3D capturing techniques was given :

o   triangulation (stereo matching, light section method),

o   time-of-flight (time measurements and correlation techniques) and,

o   interference method.

All methods discussed were illustrated with their theory, devices, circuits, and measurements.  Of course there does not exist a single best solution for all situations.  All methods have their own benefits and limitations.  According to Ikeda-san, stereo matching is limited by its computational speed ; the light section method is limited by the integration time, the mechanical speed and its robustness ; the direct TOF is limited by the readout speed, dark current rate, and robustness ; while the correlation method is limited by its integration time.

       Levy Gerzberg (Zoran) : “High Speed Digital Image Processing”.  Levy showed in his talk a few interesting examples of digital image processing, such as blur correction (due to camera shake), lens distortion correction, dynamic range correction, and colour management.  Unfortunately he did not explain the technical algorithms used to realize all these corrections. 

He also made clear that in the future the processing will become much more complex than ever before.  For instance, the algorithms applied in cameras will rely on the content of the images and can change from picture to picture.  He mentioned just one example : red-eye correction will be different for children and adults.  So the processing needs to find out whether a child or an adult is present in a picture before the red-eye correction can be done.

       Masatoshi Ishikawa (University of Tokyo) : “Vision Chips and Its Applications to Human Interface, Inspection, Bio/Medical Industry and Robotics”.  Ishikawa-san explained the need for “medium” speed imaging, described a super vision chip containing in-pixel processing, and then the real show started !  Ishikawa showed fabulous demos, such as gesture recognition (with one of his head-banging students), multi-target tracking, 3D shape recognition (illustrated with a book-flipping scanner), a bio/medical demonstration with the inspection of moving sperm, and robotics (illustrated by a raw-egg catcher, a robot catching a mobile phone in free space, two robots playing baseball, and dynamic dribbling).

Together with the great examples of high-speed vision Ishikawa gave very funny details about the preparation of the demos.  He really “entertained” the audience.

       Ronald Kapusta and Katsu Nakamura (Analog Devices) : “High-Speed Analog Interfaces for Image Sensors”.  Nakamura explained how Analog Devices tries to fit their analog interface circuits to a wide variety of input signals coming from different sensors of different vendors, all with their own specifications.  He explained the need for input reconfigurability for inter-operability, and the design challenges of the analog interfaces to achieve low crosstalk and matching at or below the 14-bit level.  When he came to the explanation of a kTC-noise reduction circuit, an interesting discussion started about the presence and reduction of kTC noise in the analog circuitry Nakamura showed.

       Jean-Manuel Dassonville (Agilent) : “Test Challenges of High Speed Imaging Serial Busses in Mobile Devices”.  The agenda of this lecture was based on :

o   requirements and technology trends on imaging interconnects, such as higher throughput, less PCB space, less skew issues, less clock issues, provide mechanisms for reliable data transfer, reduce power consumption,

o   main attributes of high speed serial interconnects,

o   overview of some imaging interconnects (MIPI, HDMI, MDDI, DisplayPort),

o   typical test requirements and challenges.

 

The forum ended with a short panel discussion.  A strong program, excellent speakers, and good documentation (with a lot of references included in the printed material) made this exhausting day a great forum !

 

Albert 2010-02-17

ISSCC Report (5)

February 14th, 2010

At the International Solid-State Circuits Conference a one-day forum was organized and chaired by Johannes Solhusvik (Aptina) entitled : “High-Speed Imaging Technologies”.  An appealing program was put together, and here I would like to list a few highlights.  After the opening by Johannes, the following topics were discussed :

       Boyd Fowler (Fairchild Imaging) : “High Speed CMOS Pixel Physics and Electronics”. Boyd’s talk showed that high-speed imaging is more than just a fast readout !  From his presentation the following lessons could be learned :

o   QE and MTF are inversely related : a large QE requires the collection of deeply generated electrons, but unfortunately these deeply generated electrons diffuse freely through the non-depleted substrate and cause contrast losses,

o   carrier transport needs to be dominated by drift for high-speed detection; if the carrier transport is dominated by diffusion, it takes too long before the carriers are collected in the diodes,

o   lag must be eliminated during carrier transport,

o   digital readout (in-pixel ADC) is about an order of magnitude faster than analog,

o   high speed sensors need to be fully depleted on a thin substrate; this is basically a combination of the aforementioned arguments.

Some of these statements were illustrated by means of simulations.
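The drift-versus-diffusion argument can be made concrete with a back-of-the-envelope calculation.  The sketch below is my own illustration with assumed numbers (a 5 µm generation depth and a 1 kV/cm field), not figures from Boyd's talk :

```python
# Back-of-the-envelope comparison of carrier collection by diffusion
# (field-free region) versus drift (depleted region).  Depth and field
# values are my own assumptions for illustration only.

D_N = 36.0     # electron diffusion coefficient in Si at room temp, cm^2/s
MU_N = 1400.0  # electron mobility in Si, cm^2/(V*s)

def diffusion_time(depth_cm):
    """Characteristic diffusion time t ~ d^2 / (2 D)."""
    return depth_cm ** 2 / (2.0 * D_N)

def drift_time(depth_cm, field_v_per_cm):
    """Drift transit time t = d / (mu E)."""
    return depth_cm / (MU_N * field_v_per_cm)

depth = 5e-4  # 5 um generation depth, expressed in cm
print(f"diffusion: {diffusion_time(depth) * 1e9:.2f} ns")          # ~3.5 ns
print(f"drift (1 kV/cm): {drift_time(depth, 1e3) * 1e12:.0f} ps")  # ~360 ps
```

Even with these rough numbers drift wins by roughly an order of magnitude, which matches the lesson that high-speed sensors want full depletion.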

       Jan Bosiers (DALSA) : “High-Speed Imaging with CCDs”.  His talk contained details about :

o   high-speed operation of CCDs in general,

o   high-speed readout architectures,

o   high speed capture concepts and

o   details about a high speed CCD imaging sub-system.

Jan’s talk contained quite some details about the ISIS sensor of prof. Etoh (fabricated by DALSA).  This device (with BSI !) is capable of capturing 1 M frames per second.  Great animations (with an NHK camera) showed the capabilities of this camera and sensor.

       Guy Meynants (CMOSIS) : “High speed CMOS image sensor architectures”.  In his talk Guy addressed the following topics :

o   High speed CIS architectures with an analog output, focusing on column load schemes, track-and-hold circuit, column amplifier, multiplexer design, ROI readout, column bus construction, analog buffers and power multiplexing,

o   High speed CIS architectures with ADC on-chip, with column ADC, global shuttering.

Guy’s conclusion was actually straightforward : parallelism is applied heavily to speed up the devices.  He illustrated this with many examples and with a nice demo at the end of his talk.

       Shoji Kawahito (Shizuoka University and Brookman Technology) : “Column readout circuit design for high-speed low-noise imaging”.  The following items were addressed :

o   Analog versus digital column readout,

o   Column readout with preamplifier,

o   Column-level ADC with accelerated readout timing,

o   Source follower noise analysis,

o   Column ADC with preamplifier and CDS,

o   Column parallel ADC architectures,

o   Digital CDS,

o   Figure of merit of column ADC and imagers.

 

Will be continued,

 

Albert 14-02-2010.

ISSCC Report (4)

February 11th, 2010

Continuation of previous blog.

 

       S. Itoh et al. (Shizuoka University and Toyota) : “CMOS imager for LED based optical communication”.  This paper described an application I had never heard of before : traffic lights, road signs and backlights of cars can be made of LEDs that send out a certain coded signal while normally activated.  These codes can be captured by a camera, and the output of the camera can be used to drive/steer/manage a car driving on the road.  One prerequisite is that all the traffic lights, road signs and backlights need to be made of LEDs !

The authors described a CMOS image sensor with a double set of pixels organized in interleaved columns, for instance, all odd columns contain pixels that can be used for normal imaging, all even columns contain pixels intended for the capturing of the coded signals transmitted by the LEDs of the traffic lights, road signs and other cars on the road.  The pixels in the even columns are also based on a pinned photodiode in combination with a lateral charge overflow drain.  Very interesting application in traffic control !

 

       S. Mandai et al. (University of Tokyo) : “3D range-finding image sensor”.  A 3D method based on a projected sheet beam is presented.  When using a 2D imaging array it takes quite a bit of time to analyze the image and to “find” the reflected beam in it.  In this paper a clever row-parallel search and address-encoding architecture is presented.  It makes use of a row-parallel binary search tree and address encoding for high-speed detection and read-out of activated pixel addresses.  In this way the speed of detecting the depth in the image can be increased by an order of magnitude. 

 

       D. Stoppa et al. (Fondazione Bruno Kessler) : “Range image sensor based on lock-in pixels”.  The range-finding architecture based on lock-in pixels is not new, but the presented pixel is.  The authors showed a (prior art) lock-in pixel based on a pinned photodiode, but also illustrated the drawback of this pixel : limited speed, because the electric drift field in the pixel is too low.  By increasing the drift field in the pixel the speed can be increased, and this is done by adding two poly-Si gates on top of the pinned photodiode.  These two poly-Si gates are toggled during operation.  A simple idea, but it has some very interesting benefits : the authors show that the speed of the pixels can be increased (almost by two orders of magnitude) as well as the demodulation contrast (by more than a factor of 2).  It should be remarked that, although the authors call this a pinned-photodiode pixel, the pixels are actually not pinned.  Gates on top of a real pinned photodiode would not create any drift field in the pixels, but in the paper the top layer of the pinned photodiode is no longer a p+, but a p-.  What is actually realized is not a pinned photodiode, but a buried p-type CCD channel.  Basically, it is not important what the thing is called; the performance is what really matters.
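For reference, the demodulation contrast quoted above is commonly defined from the charges collected under the two toggled gates.  A minimal sketch in my own notation, with invented numbers :

```python
# Demodulation contrast of a lock-in pixel from the charge packets Q_a
# and Q_b collected under the two toggled gates.  The definition is the
# commonly used one; the example numbers are invented for illustration.

def demod_contrast(q_a, q_b):
    """C = |Q_a - Q_b| / (Q_a + Q_b), between 0 (no modulation) and 1 (ideal)."""
    return abs(q_a - q_b) / (q_a + q_b)

print(demod_contrast(900, 100))  # -> 0.8
```

A faster charge transfer sorts more of the photo-charge into the correct gate, which is why a higher drift field improves both speed and contrast.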

 

       T. Azuma et al. (Panasonic) : “4K x 2K image sensor based on dual resolution and exposure technique”.  The presenter started with an interesting statement : the sensitivity of solid-state image sensors has increased by a factor of 40,000 compared to the device originally invented by Boyle and Smith.  It is up to the reader to judge whether this is correct or not !

In the paper a sensor is described that tries to improve the sensitivity by another factor of 4.  This is done by an addressing architecture that allows the green pixels to be read and processed independently of the blue and red ones.  In this way the green pixels get an exposure time 4 times as long as normal.  This is the sensitivity gain, but of course pretty nasty motion artifacts will be introduced.  The blue and red pixels get the normal exposure time, but in the red and blue channels the gain is created by pixel binning in 4 x 4 mode (actually you even get a factor of 16 in sensitivity !).  Clever signal processing should use the information of the blue and red channels to correct the motion artifacts of the green channel. 
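The claimed gains are easy to check with trivial arithmetic, assuming the signal scales linearly with exposure time and with the number of binned pixels (my own back-of-the-envelope, not the authors' numbers) :

```python
# Sensitivity gain under the (assumed) linear model: gain scales with
# exposure time and with the number of pixels binned together.

def sensitivity_gain(exposure_factor, binned_pixels):
    return exposure_factor * binned_pixels

print(sensitivity_gain(4, 1))      # green: 4x exposure, no binning -> 4
print(sensitivity_gain(1, 4 * 4))  # red/blue: normal exposure, 4x4 binning -> 16
```

The asymmetry between the channels (4x for green, 16x for red/blue) is exactly why the red and blue channels can serve as the motion reference for correcting the green channel.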

 

       H. Wakabayashi et al. (Sony) : “10.3 Mpix back-side illuminated imager”.  The authors presented the backside-illuminated device that was already introduced by Suzuki during his plenary talk.  A few extra details were shown : the device can run at 50 fr/s at the full resolution of 10.3 Mpix with 10 bits resolution.  In the case of 12 bits, the speed slows down to 22 fr/s, or in the case of 60 fr/s, the resolution goes down to 6.3 Mpix.  Operation at 120 frames/s is also possible, but then with only 3.2 Mpix.  Apparently this sensor should be able to combine DSC and video in one device.  Pretty interesting numbers were given for the angle dependency : for a ray angle of 10 deg, the sensitivity drop is only 9 %, and moreover the difference between the three colour channels is less than 1 %.  At a ray angle of 20 deg. the remaining sensitivity is slightly less than 60 %, and the difference between colour channels is 3 % (based on the figure shown in the proceedings).  The presenter mentioned that the cross-talk of the back-side illuminated sensor is optimized by means of an extra metal shield at the back of the device, by optimized doping of the photodiode, by optimized doping of the substrate and by an optimized wiring of the metal layers on the front side. 

Some numbers : 0.14 um process, 1P4M, LVDS output, lag below the measurement limit, power consumption 375 mW for the HD video mode.  Most of the other performance data is mentioned in one of the earlier blogs.

 

Conclusion : very good image sensor session, very interesting papers, all presenters had very well-organized talks, high quality sheets.  Excellent work of Dan McGrath (and his crew), who is chairing the image-sensor subcommittee of ISSCC. 

 

Albert 2010-02-11