Archive for November, 2012

Symposium on Microoptical Imaging and Projection (3)

Thursday, November 29th, 2012

The morning sessions of the third Symposium day concentrated further on the technology of microoptics.  Pierre Craen (poLight, Norway) gave a talk about a new auto-focusing system for mobile phone applications.  The technology is MEMS based : a polymer is sandwiched between two glass surfaces.  The lower one is a rigid glass plate, the top one is a glass membrane which is deformable by means of a piezofilm (driven at 20 V).  The deformation of the glass membrane is transferred to the polymer, and in this way a deformable lens is created (looks a bit similar to the fluid lens of Varioptic, but with glass and polymer instead of water and oil).

During the presentation several measurements and data were shown.  What I could grab : 1 ms reaction time, 5 mW power consumption, very small, very thin (0.4 mm thickness), transmission > 95 %, diffraction limited, re-flowable at 260 deg.C, wafer scale technology (8”) and a wide temperature range (- 40 deg.C to 200 deg.C).  Also mentioned was the limitation of the technology to small apertures (up to 1.7 mm, corresponding to maximum 1/3” or 1/2.5” image sensors).  The technology is named : Tunable Lens or TLens.

Is this technology capable of kicking the VCMs out of the mobile phones ?  According to Eric Mounier (Yole Developpement, France) VCMs still have a 95 % market share.  A nice opportunity for poLight, but also a nice challenge for the TLens.

Albert, 29-11-2012.

Symposium on Microoptical Imaging and Projection (2)

Wednesday, November 28th, 2012

Here is a quick overview of the second day, mainly devoted to the technology of the micro-optics.

Stephan Heimgartner (Heptagon) started the day with a talk about wafer-level micro-optics for computational imaging.  He highlighted the technology of Heptagon, ranging from wafer-level optics (= lenses on a glass wafer of 8”) to wafer-level packaging of these lenses.  The most complex wafer-level packaging technology includes 4 lenses (= 2 wafers with lenses on both sides), 2 spacer structures and an IR cut-off filter.  This stack of optical elements is used today on low-resolution sensors.  For the higher Mpix sensors this technology is not suitable.  The reason is the limitation set by the accumulated tolerances of all the materials and structures involved.

In the second half of the talk Stephan explained the Heptagon technology for multi-aperture cameras.  Remarkable is the location of the colour filters : on top of the micro-optical stack.  Also intriguing is the back focus adjustment of the structures that can be done after the lens stack is completed.  During the talk a prototype of a 2 x 2 multi-aperture camera was shown built on a 2M pixel sensor.

Zouhair Sbiaa (Nemotec) more or less confirmed what the previous speaker had already told.  The optical modules built by means of the wafer-level technology are limited to two wafers due to tolerances.  Zouhair showed a prototype of a micro-optical component on top of a HD 720p sensor.  This optical module was individually placed on top of the sensor.

Although not indicated in the program, Steven Oliver (Lytro) gave a talk about their light field camera.  The talk started and ended with some marketing stuff, but in between some very interesting slides were shown.  With the light field camera, more freedom can be generated in (after-)focusing of the image, but also some freedom in perspective view is possible.  Playing around with filters (in the software) can add more features to the image, and the demo with the movement of light and shadows was also great.

More on the technical side : the 3.0 camera (intended for social media) has a high quality lens included (8x magnification, F2).  Just in front of the sensor, a micro-lens array is placed.  The latter has 330 x 330 micro-lenses, arranged in a hexagonal grid.  The pitch of the micro-lenses is 13.9 um, and every micro-lens is covering 10 x 10 pixels of the sensor.  The sensor itself is a 14M pixel device, finally cropped to 11M pixels.
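As a quick back-of-the-envelope check, the numbers quoted in the talk are consistent with each other (the sketch below uses only the figures above; any rounding in the crop is my assumption):

```python
# Sanity check on the Lytro numbers quoted above.
n_lenses = 330 * 330           # hexagonal micro-lens array
pixels_per_lens = 10 * 10      # each micro-lens covers ~10 x 10 pixels
covered = n_lenses * pixels_per_lens
print(covered)                 # pixels actually used out of the 14M device
```

The result, 10 890 000 pixels, indeed matches the ~11M pixel crop mentioned above.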

Flavien Hirogoyen (ST Microelectronics) gave us, literally, a deep insight into the pixels by showing great results of his simulations of the optical characteristics of the CMOS pixels.  The pixels were 1.45 um in size, and it is remarkable how well the FDTD (Finite-Difference Time-Domain) simulations fit the measured quantum efficiency data.  The speaker showed results for monochromatic light as well as for white light of different colour temperatures.

Andreas Spickermann (Fraunhofer IMS, Duisburg) concluded the morning session with an overview of what his institute is able to do in the field of CMOS image sensors.  Too much actually to list here, but worthwhile to mention : Fraunhofer IMS has its own 0.35 um opto process with pinned photodiodes, colour filters and micro-lenses.  Among many other options, an interesting one is the use of a nitrogen-enriched Si3N4 layer for the passivation.  The layer has a relatively high transmission in the region above 200 nm wavelength.

Palle Dinesen (Kaleido) kicked off in the afternoon with an all-glass solution approach for micro-optics.  He gave a nice overview of all issues involved in the making of the tools to mold the glass lenses.  Glass has the advantage over plastic that it can be made much thinner and is less sensitive to temperature variations.  The know-how of Kaleido is situated in the making of the tools (by a grinding method) as well as in the molding process.  What could be understood : the material is pre-heated before it enters the molding chamber.  The molding chamber is kept at a relatively high temperature, and after the molding the end-result gets a specific cooling-down treatment.  The formation of the lenses on the two sides of the carrier is done in one step.  The technology was demonstrated by means of a prototype on 2”, and now the technology is being expanded to 4”.  Mass production will start in Q3/2013.

The final presentation came from Reinhard Voelkel (Suss Microoptics) who tried to give us an answer to the question : “How many channels do you need in an array ?”.  To get an answer to this question, the speaker checked what is present in nature.  Conclusion : either 2 (there were many examples in the room), or 8 (some spiders seem to have 8 eyes), or many (insects).  So draw your own conclusion ….

One final word about the organization of the symposium : perfect !  The organization even distributed rain coats to the participants.

Albert, 28-11-2012.

Symposium on Microoptical Imaging and Projection (1)

Tuesday, November 27th, 2012

Today, November 27th, the brand new Symposium on Microoptical Imaging and Projection started at the Fraunhofer Institute in Jena (Germany). Due to my late arrival at the Symposium and my lack of knowledge in projection, I only attended 7 talks, of which 5 were on imaging. Most of the imaging stuff today was about multi-aperture cameras. Unfortunately I have to report that not that much new stuff was presented in the first two talks. The presentations of Pelican Imaging (here given by Jacques Duparre) and the one from LinX Imaging (given by Ziv Attar) were repetitions of the ones I heard before. What I memorized from Ziv’s talk is the fact that a multi-aperture camera still has some challenging issues to solve. To name a few : manufacturability of the optics, packaging, sensor compatibility, image processing, lack of standards, processing power needed, and power consumption. After the talk, Ziv advised me to be positive, so here is a list of advantages of a multi-aperture camera : low height (Z dimension), zero colour cross-talk, simple colour filter technology, very simple colour correction matrix, depth sensing, extremely wide depth of focus, wide viewing angle, no need for an auto-focusing system up to 10 Mpixels, fully independent control of each camera in the array, wide dynamic range, … . Maybe I am still forgetting some.

Interesting was the work reported by Andreas Brueckner (Fraunhofer Institute, Jena). He presented a multi-aperture camera based on a regular 2D CMOS image sensor of 3M pixels, provided with a dedicated lens array to make a multi-aperture camera out of it. Andreas also presented some images as well as numerical data. An engineer likes to see numbers (although Neil said “Numbers add up to nothing”). At the end of the talk, Andreas announced that they are working on a much higher resolution imager than the one used today.

Next was the talk of Edward Dowski (Ascentia Imaging, Boulder, CO), who added a coding grid on top of the multi-aperture cameras, e.g. to depolarize the incoming light. Different apertures can be coded in a unique way relative to the other channels in the multi-aperture system. In many applications this enables location estimation of general objects with sub-pixel precision.

The last multi-aperture solution was presented by Guillaume Druart (ONERA, Palaiseau, France), who is using the device for IR sensing ! A regular IR sensor is provided with an array of 4 x 4 lenslets, which allows the focal length of the individual lenslets to be 4 times shorter than that of a regular lens in front of the full-resolution device. The lenslets are placed and/or designed such that the 16 sub-arrays do not “see” the same information. So out of the 16 low-resolution images, a single high-resolution end result is constructed. A nice video of moving images concluded the talk. See you tomorrow through a multi-aperture camera ?
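The idea that 16 differently-sampled low-resolution channels together carry full-resolution information can be illustrated with a minimal sketch. The interleaving below is a deliberate simplification of my own, assuming ideal sub-pixel shifts between the channels; ONERA's actual reconstruction is certainly more sophisticated:

```python
import numpy as np

def interleave(subimages):
    """Combine an N x N grid of shifted low-resolution images into one
    high-resolution image.

    subimages[i][j] is assumed to sample the scene at a sub-pixel offset
    of (i/N, j/N); under that idealized assumption the simplest
    reconstruction is to interleave the samples on a fine grid.
    """
    n = len(subimages)
    h, w = subimages[0][0].shape
    out = np.zeros((n * h, n * w), dtype=subimages[0][0].dtype)
    for i in range(n):
        for j in range(n):
            out[i::n, j::n] = subimages[i][j]   # place each channel's samples
    return out
```

With perfectly shifted channels this exactly inverts the sub-sampling; in practice registration errors and lens differences are what make the real reconstruction hard.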

Albert, 27-11-2012.

How To Measure Conversion Gain ?

Thursday, November 15th, 2012

The conversion gain of an imager or imaging system links the output to the input. In the good old days of the CCDs it was simply the ratio of the voltage variation at the source follower output versus the amount of electrons supplied to the floating diffusion. The output voltage variation could be measured by means of an oscilloscope (good old days !).  And the amount of charge supplied to the floating diffusion could be characterized by measuring the reset drain current. A simple but efficient method, because in most CCDs the reset drain voltage has/had a separate connection and is/was not connected to the supply of the source follower.

With today’s CMOS devices that is different. The in-pixel source follower and in-pixel reset transistor have a common connection, and separate measurement of the reset drain current is no longer possible. But there is still the Photon Transfer Curve (PTC) that can help to characterize the conversion gain of the complete CMOS imaging chain : how many volts or how many digital bits do we get out for every electron generated and transferred to the in-pixel floating diffusion ? Of course I do realize that we have spent a lot of time and blogs on the PTC which can be used to measure the conversion gain. I will not repeat all that great stuff over here. The various PTC options to obtain the conversion gain are the shot noise method (= noise versus effective signal) and the mean-variance method.
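As an illustration of the mean-variance method, the following sketch estimates the conversion gain from simulated shot-noise-limited data. The gain value, exposure levels and sample count are made up for the example, and read noise, offset and non-linearity are ignored:

```python
import numpy as np

rng = np.random.default_rng(0)

K_true = 2.0                                       # conversion gain [e-/DN], assumed
exposures_e = [200, 500, 1000, 2000, 5000, 10000]  # mean signal levels [electrons]

means, variances = [], []
for ne in exposures_e:
    electrons = rng.poisson(ne, size=100_000)      # shot noise : variance = mean
    dn = electrons / K_true                        # convert to digital numbers
    means.append(dn.mean())
    variances.append(dn.var())

# Mean-variance method : variance = mean / K, so the slope of the
# variance-versus-mean line is 1/K.
slope = np.polyfit(means, variances, 1)[0]
print(1.0 / slope)                                 # estimate of K_true
```

The same data plotted on log-log axes would give the classical PTC with its slope-1 shot-noise region.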

Besides the CCD reset-drain method and the PTC method, a third possibility exists to characterize the conversion gain, although with today’s safety rules this method is no longer that popular. But placing a radio-active Fe55 isotope in front of the sensor can do the job. The Fe55 X-ray photons have an energy of 5.9 keV and generate about 1620 electrons in silicon. If the sensor has large pixels and the radiation source is kept “far away” from the sensor, the chance is pretty large that some pixels are hit by a single X-ray photon while most of them are not hit at all by the incoming X-rays. In this way some (large !) pixels will nicely collect all, and just all, 1620 electrons generated by a single incoming X-ray photon. Simple and efficient !
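The arithmetic behind the Fe55 method is simple enough to put in a few lines. The peak position below is a hypothetical measurement, not a value from the text:

```python
# Fe55 method : one 5.9 keV X-ray photon generates ~1620 electrons in
# silicon (5900 eV divided by ~3.65 eV per electron-hole pair).
E_PHOTON_EV = 5900.0      # Fe55 K-alpha line [eV]
EV_PER_PAIR = 3.65        # e-h pair creation energy in silicon [eV]
electrons = E_PHOTON_EV / EV_PER_PAIR   # ~1616, commonly rounded to 1620

# If the single-hit peak in the output histogram sits at peak_dn digital
# numbers above dark level, the conversion gain follows directly.
peak_dn = 810.0           # hypothetical measured peak position [DN]
gain_e_per_dn = 1620.0 / peak_dn
print(round(electrons), gain_e_per_dn)
```

Every single-hit pixel delivers the same known charge packet, which is exactly why the method is so simple: the histogram peak is a built-in calibration point.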

Albert, 15-11-2012.

Curved CCD sensor and more at Vision Stuttgart 2012

Wednesday, November 7th, 2012

Yesterday I quickly visited the Vision 2012 show in Stuttgart.  The buzz words at the show were 3D cameras, high speed, USB3.0.  On sensor level, there were four items that really impressed me :

–          A curved CCD sensor at the booth of Andanta.  There were already rumours that people were working on curved sensors, and I was contacted a few years ago to prepare an R&D proposal for curved sensors.  But now it was the first time that I saw one.  The sensor was bent in two (!) directions with a curvature radius of 500 mm.  The device has 16 M pixels and a size of 6 cm x 6 cm.  The packaged device was mounted on a PCB, but no working devices were shown,

–          A 300 mm wafer with full-size CMOS imagers (36 mm x 24 mm) at the booth of CMOSIS.  They displayed a complete 300 mm wafer with the sensor that is fabricated for the Leica M-camera.  Taking into account that I started my career with wafers of 50 mm (2 inches) diameter ….,

–          A global shutter HDR sensor at the booth of New Imaging Technology.  Knowing their pixel architecture I think it is a major achievement to incorporate a global shutter mode in the sensor.  NIT had a working camera to show to the visitors,

–          Another remarkable observation : when passing by the booth of Kappa, my eyes caught an older gentleman who was explaining the product portfolio of Kappa.  He had a CCD attached to the collar of his jacket, and I immediately recognized the device.  Maybe I was the only one at the whole Vision show who has some connection to it : it was an NXA1011 of Philips, the one I designed in 1983 when I started working at Philips Research.  This imaging world is really a small world !

Albert, 07-11-2012.