Archive for the ‘Uncategorized’ Category

Good Bye 2016 ! 

Friday, December 23rd, 2016

Once again a year has (almost) passed.  I know it sounds a bit silly, but time is flying by, and I do have the impression that everything is moving faster than ever before.

2016 started with a great special issue of the IEEE Transactions on Electron Devices, devoted to solid-state image sensors, in January.  I had the honour of being the guest-editor-at-large for this special issue.  (What does the title of guest-editor-at-large mean ?  A lot of work !)  But I am a big fan of IEEE-ED and IEEE-JSSC, because these journals are great sources of information from and for our community.  So I was really pleased with IEEE's invitation to serve as the guest-editor-at-large, and I am happy that I could cooperate with my soul mates in imaging.

In 2015 Harvest Imaging brought a new product to the market : a reverse engineering report of a particular imaging feature present in a commercial camera.  The first reverse engineering report was devoted to Phase Detection Auto-Focus Pixels.  In the meantime, in 2016, I started with a new project.  Because the new project is still in the preparation phase, it is difficult to disclose the topic, but it will be based on tons and tons of measurements.  Recently I bought EMVA1288 test equipment and I do hope to get started with it sometime after New Year.

The Harvest Imaging Forum 2016 targeted “Robustness of CMOS Technology and Circuitry”.  I do have to admit that the interest in the 2016 Forum was lower than in the 2015 Forum, something I do not immediately understand, because the robustness of CMOS is a topic that should be of interest to our imaging community as well.  The main objective of the Harvest Imaging Forum is to address topics that are somewhat outside my own core expertise, but still important subjects for solid-state imaging.  (For subjects that belong to my own expertise, I do not have to hire external instructors of course.)  Nevertheless, Harvest Imaging will continue with the Forum in 2017.  I do have a topic and a speaker in mind, but the speaker himself does not know yet.  More info will follow in the spring of 2017, I guess.

Although (or maybe just because ?) we did not have a new IISW in 2016 (the next one will be in 2017), two new conferences were launched in Europe : AutoSens and MediSens.  I attended both, also because both of them are organized by a good friend of mine, Robert Stead, and his crew.  I was happy to see new applications being introduced by young engineers working in the solid-state imaging field.  I am pretty sure that the next generation will be capable of continuing to grow the solid-state imaging business.  Imaging has never been as big and appealing as it is today, and I am pretty sure that in the future imaging can and will only become bigger.

Welcome 2017 !  Looking forward to another great imaging year, with the IISW in Japan !

Wishing all my readers a Merry Christmas and a Happy New Year.  “See” you soon.

Albert, 23-12-2016.


Signal-to-Noise Ratio (SNR)

Friday, December 16th, 2016

The Signal-to-Noise Ratio quantifies the performance of a sensor in response to a particular exposure.  It is the ratio of the sensor’s output signal to the noise present in that output signal, and can be expressed as :

SNR = 20·log(Sout/σout)

With :

  • SNR : signal-to-noise ratio [dB],
  • Sout : output signal of the sensor [DN, V, e],
  • σout : noise present in the output signal [DN, V, e].

Notice that :

  • the output signal and the noise level need to be expressed in the same way : in digital numbers (DN), in Volts (V) or in number of electrons (e),
  • the specification of the SNR only makes sense if the input signal is also clearly specified. Without an input signal, there is no output signal,
  • the noise is the total temporal noise of all parts, in the pixel itself as well as in the readout chain of the pixel. For some applications the photon shot noise is included in σout as well, for others it is not (see further).
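As a minimal illustration of the definition above (just a sketch, with hypothetical numbers, not taken from any particular sensor), the SNR can be computed as follows once signal and noise are expressed in the same unit :

```python
import math

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB; signal and noise in the same unit (DN, V or e-)."""
    return 20.0 * math.log10(signal / noise)

# Hypothetical example : an output signal of 1000 e- with 5 e- total temporal noise.
print(snr_db(1000.0, 5.0))   # about 46 dB
```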

A few important remarks w.r.t. signal-to-noise ratio :

  • the signal-to-noise ratio specified for an imager is a single number that should be valid for all pixels. Because the pixels are analog in nature, they all differ (a little bit) from each other.  About 50 % of the pixels will have a lower signal-to-noise ratio than the specified value and about 50 % will have a higher signal-to-noise ratio than the specified value,
  • that single number does not carry any information about the dominant noise source, nor about the column noise, row noise and/or pixel noise,
  • the fixed-pattern noise is not included in the definition of SNR. The argumentation very often heard is that fixed-pattern noise can be easily corrected, but any correction or cancellation of fixed-pattern noise may increase the level of the temporal noise and will reduce the signal-to-noise ratio,
  • in case the sensor is used for video applications, very often the photon shot noise is omitted from the total noise σout, so the SNR listed in the data sheet is much higher than what reality will bring. If the sensor is used for still applications, the photon shot noise is mostly included in the total noise σout,
  • in a photon-shot-noise limited operation of the sensor, the noise σout is by definition equal to the photon shot noise, and the maximum SNR that can be delivered by the sensor will be :

SNRmax = 20·log(Ssat/√Ssat) = 20·log(√Ssat) = 10·log(Ssat)

With :

  • SNRmax : maximum signal-to-noise ratio [dB],
  • Ssat : saturation output signal of the sensor [e].
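As a worked example (with a hypothetical saturation level, just to illustrate the formula) : in the photon-shot-noise limited case the noise at saturation equals √Ssat, so a full well of 10,000 electrons gives SNRmax = 10·log(10,000) = 40 dB :

```python
import math

def snr_max_db(sat_signal_e):
    """Maximum SNR in dB of a photon-shot-noise limited sensor.

    At saturation the shot noise equals sqrt(Ssat), so
    SNRmax = 20*log10(Ssat / sqrt(Ssat)) = 10*log10(Ssat)."""
    return 10.0 * math.log10(sat_signal_e)

# Hypothetical full well of 10,000 e- :
print(snr_max_db(10_000))   # 40.0 dB
```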
One more remark : the various noise sources present in a sensor (strongly) depend on temperature, and so does the SNR.  There is not a single noise source that becomes better (= lower noise) at higher temperatures.  But in most data sheets the SNR is specified at room temperature.  Be aware that sensors that are not cooled or temperature stabilized will run at a temperature higher than room temperature due to the self-heating of the sensor in the camera.  This effect will automatically reduce the SNR below the numbers specified in the data sheet.


In conclusion : the SNR value specified in data sheets can never be reached in real imaging situations by all pixels, because it is an average number, the fixed-pattern noise is not taken into account, the self-heating of the sensor lowers the SNR, and moreover, in many video applications the photon shot noise is omitted.


Albert, 16-12-2016.

DYNAMIC RANGE (DR)

Friday, December 2nd, 2016

The Dynamic Range (DR) of an imager gives an indication of the imager’s ability to resolve details in dark areas as well as details in light areas of the same image.  It indicates the ratio of the largest signal that can be detected to the smallest signal that can be detected.

Mathematically it is defined as :

DR = 20·log(Ssat/σread)

with :

  • DR : dynamic range [dB],
  • Ssat : saturation signal of the sensor [DN or V or e],
  • σread : noise in dark [DN or V or e].

Notice that :

  • the saturation level and the noise in dark need to be expressed in the same way : in digital numbers (DN), in Volts (V) or in number of electrons (e),
  • the noise in dark is the total temporal noise contribution of all electronic parts that are included in the readout chain, starting in the pixel.
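A minimal sketch of the calculation (the numbers are hypothetical, not taken from any data sheet) :

```python
import math

def dynamic_range_db(sat_signal, read_noise):
    """Dynamic range in dB; saturation level and noise in dark in the same unit (DN, V or e-)."""
    return 20.0 * math.log10(sat_signal / read_noise)

# Hypothetical example : 10,000 e- saturation level and 2 e- noise in dark.
print(dynamic_range_db(10_000.0, 2.0))   # about 74 dB
```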

A few important remarks w.r.t. dynamic range :

  • the dynamic range specified for an imager is mostly a single number that should be valid for all pixels. Because the pixels are analog in nature, they all differ (a little bit) from each other, and in principle about 50 % of the pixels will have a lower dynamic range than the specified value and about 50 % will have a higher dynamic range than the specified value,
  • the noise in dark does not contain any noise related to the exposure of the imager, for instance dark-current shot noise. So in reality the noise present in the output signal will always be higher than the one used in the calculation of the dynamic range, because of the presence of dark-current shot noise.  Moreover, dark-current related noise sources are strongly dependent on the integration time,
  • in normal operation of an imager, the so-called junction temperature of the sensor is always higher than the room temperature at which the dynamic range is specified. Temperature has a serious impact on the noise level and consequently on the dynamic range,
  • fixed-pattern noise is not included in the definition of DR. The argumentation is that fixed-pattern noise can be easily cancelled, but any correction or cancellation of fixed-pattern noise may increase the level of the temporal noise and will reduce the dynamic range again.

In conclusion : the DR value found in data sheets is only a theoretical number, which can never be reached by all pixels in real imaging situations, because it is an average over all pixels.  Very often it cannot even be reached by any of the pixels, because it takes into account neither exposure, nor temperature effects, nor fixed-pattern noise issues.

Albert, 02-12-2016.

Harvest Imaging Forum 2016 : Update

Tuesday, October 25th, 2016

The Harvest Imaging Forum 2016 is almost sold out : there are only 4 seats left in the session of December 8th and 9th, 2016.  In contrast to the 2015 Forum, there will be no extra session organized in the coming January !  First come, first served !

Albert, 25-10-2016.

Training Last Week in Dresden (Germany)

Monday, October 17th, 2016

Last week I taught the 5-day class “Digital Imaging” organized by CEI.  This time the training took place in Dresden (Germany).  For the very first time we added a company visit to the training (after teaching hours).  We had the luxury of being invited by Aspect Systems to visit their premises in Dresden.  After the course on day 3, taxis were organized by CEI to bring the participants to Aspect Systems.  We were welcomed by Marcus Verhoeven, one of the company’s founders.  Marcus explained and showed us the activities of Aspect Systems.  In the imaging community, Aspect Systems is known for their test services, but actually they do much more than that.  Aspect Systems develops hardware, software, algorithms, optics and mechanics for imaging applications, independent of the final purpose of the systems (testing, evaluation or others).

We limited the visit to 1 hour, so as not to overload the course participants with technical information after already 3 days of training.  But in hindsight that limitation seemed to be a mistake.  Everybody was so enthusiastic about the visit and the contact with the real imaging world that the only complaint we got was about the length of the visit : too short.

With this blog, I want to thank Marcus Verhoeven and his co-workers for their time and hospitality in having us at their premises.  Hopefully we can repeat this company-visit experiment when we are again in Dresden for another training.  But then, for sure, we will spend some more time at Aspect Systems.

Thanks Marcus and team, success with your business !

Albert, 17-10-2016.

Harvest Imaging Forum 2016

Thursday, October 13th, 2016

For those of you who are still interested in the Harvest Imaging Forum 2016 : there are only 6 seats left in the session of December 8th and 9th, 2016, before the forum is sold out.  In contrast to the 2015 Forum, there will be no extra session organized in the coming January !  First come, first served !

Albert, 13-10-2016.

AutoSens 2016 in Brussels

Thursday, September 22nd, 2016

Yesterday morning (Sep. 21st, 2016) I attended a few sessions of AutoSens 2016 in Brussels.  It is a new conference organized by the people who started and grew Image Sensors Auto when they were still working for Smithers.  AutoSens was very well attended and very well organized in a great setting, namely the Auto-Museum in Brussels.  Excellent choice !

In the morning sessions, there were 3 papers related to image sensors.  Pierre Cambou (Yole Développement) talked about new developments in mobile and their spin-off to automotive applications, Daniel van Nieuwenhove (Softkinetic) gave a nice overview of 3D imaging technologies and how Softkinetic’s solutions fit into this landscape, and Tarek Lule’s (ST) presentation was about an HDR flicker-free CMOS image sensor.  The latter had the most technical information, although Tarek did not give any details about the pixel architecture.  But what I understood from his talk is the following : the pixels make use of multiple photodiodes :

  • a large photodiode captures information in a continuous mode during the exposure time, with a high sensitivity, and basically “looks” after the details in the darkest parts of the image,
  • a small photodiode captures information in a chopped mode : during the exposure time the photodiode is active for short periods and inactive for the remainder of each chopping period, and it sums the signals obtained during these short active periods.  In this way, information is collected AT THE SAME TIME as by the large photodiode, but because of the size and of the chopping, this diode “looks” after details in the mid-range of the image and motion artefacts can be avoided,
  • a second small photodiode also captures information in a chopped mode : during the exposure time the photodiode is active for VERY short periods and inactive during most of each chopping period, and it sums the signals obtained during these very short active periods.  In this way, information is again collected AT THE SAME TIME as by the large photodiode, but because of the size and the chopping, this diode “looks” after details in the high-range of the image and motion artefacts can be avoided.  The chopping frequencies of the two smaller diodes are the same; the only difference is the duty cycle of active and non-active times.

Apparently the pixel needs three photodiodes, but because of the chopping, the work that needs to be done by the two smaller photodiodes can be done by a single one in combination with an appropriate time-multiplexing between the short and very short active times within the chopping period.  So, the pixel is based on two photodiodes in combination with a few storage nodes.  More information was not revealed …
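To make the chopping idea more concrete, here is a small sketch based purely on my own understanding of the talk (the function, its parameters and the numbers are my assumptions, not ST’s actual implementation).  For a flicker-free scene, summing many short active windows is equivalent to one exposure scaled by the duty cycle, so each diode covers a different part of the illumination range :

```python
# Sketch of the duty-cycled ("chopped") exposure idea; all parameters are
# hypothetical, the real ST pixel architecture was not disclosed in the talk.

def collected_signal(photon_flux, exposure_time, sensitivity, duty_cycle, full_well):
    """Electrons collected by one (possibly chopped) photodiode, clipped at full well.

    Summing the short active windows over the exposure time is, for a
    flicker-free scene, equivalent to one exposure scaled by the duty cycle."""
    signal = photon_flux * exposure_time * sensitivity * duty_cycle
    return min(signal, full_well)

# Large diode : high sensitivity, continuous; the small diode is read out with
# a short and with a very short duty cycle.
for name, sens, duty in [("large/dark", 1.0, 1.0),
                         ("small/mid", 0.1, 0.1),
                         ("small/high", 0.1, 0.001)]:
    e = collected_signal(photon_flux=2e6, exposure_time=0.01,
                         sensitivity=sens, duty_cycle=duty, full_well=10_000.0)
    print(name, e)
```

With these (made-up) numbers the large diode saturates, while the two chopped read-outs still deliver unsaturated signals; this is exactly what extends the dynamic range.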

Conclusion : a clever idea to make a flicker-free imager with a high dynamic range (quoted 145 dB).  Not many performance numbers were given, but the overall working of the device was shown by means of a video.  Looking forward to learning more about this device !


Albert, 22-09-2016.

40 YEARS AGO.

Thursday, September 22nd, 2016

Most probably several readers of this blog were not yet born in September 1976, but exactly 40 years ago I started my career in solid-state imaging.  Of course I could never ever have guessed that 40 years later I would still be involved in the same discipline.

When I was facing the start of the last year of my MSc-EE studies (at the Catholic University of Leuven, Belgium) in September 1976, I had to choose an MSc thesis subject.  Purely by coincidence I found an MSc thesis project in the team of Gilbert Declerck, under the daily guidance of Jan van der Spiegel, then a PhD candidate in the CCD group.  The topic was the development of the hardware around a bi-linear CCD of 256 elements.  The digital driving pulses as well as the analog signal processing had to be designed and built on a breadboard.  The CCD needed to be synchronized to a rotating drum, just to show the principle and capabilities of the imaging device.  At that moment, 40 years ago, a bi-linear device of 256 elements was already something special.

I remember that another PhD candidate, Peter Schreurs, explained to me the basic principles of a CCD.  I can still recall in which lab space it was.  It was Jan van der Spiegel who explained to me the technology of the CCDs and showed me how he made the design and lay-out of the devices.  At that time, CCD image sensors were designed and fabricated in the clean room of the ESAT laboratory (Electronics, Systems, Automation, Technology).  The environment and the atmosphere in the basement of the EE-building were a great stimulation in the learning process in the field of the “young” semiconductor technology.  During the 9 months of the MSc thesis project, and especially for the hardware part of the task, I worked very closely together with Tony van Nuland.  He taught me the practical ins and outs of digital and analog circuitry.  Seen in hindsight, this was the very start of a long and lasting career in solid-state imaging.

Gilbert Declerck, the promotor of my MSc thesis, later became the CEO of IMEC; Jan van der Spiegel, my daily supervisor, became a professor at the University of Pennsylvania in Philadelphia (USA); Peter Schreurs, who explained to me the working principles of CCDs, started a career at Agfa; and Tony van Nuland, who helped me with the hardware, became a specialist in the field of ion implantation and focused-ion-beam techniques at the university’s ESAT laboratory.  As you can judge, I was in good company !  Thanks to all of you !


Albert, 22-09-2016.

BUTTING versus STITCHING (3)

Tuesday, August 9th, 2016

The stitching story started in the previous blog is not yet complete.  Further explanation deals with one-dimensional versus two-dimensional stitching, and single reticle versus multiple reticle stitching.

The difference between one-dimensional and two-dimensional stitching is straightforward.  If the stitching is done in one direction (vertical OR horizontal), it is known as one-dimensional stitching; if the stitching is done in two directions (vertical AND horizontal), it is known as two-dimensional stitching.  It should be clear that two-dimensional stitching gives the designer much more freedom in his/her design task, and allows any device size to be designed.  Most of today’s lithographic equipment is capable of handling two-dimensional stitching, but in the earlier days some types of alignment machines allowed only one-dimensional stitching.  For image sensors, the very first one-dimensional stitching was done by E2V, while the very first two-dimensional stitching was realized by Philips Research Labs.

Another important discussion is the restriction to single reticle stitching or the option of multiple reticle stitching.  If the field of view of the lithographic machine is limited, it should be clear that (to limit the number of stitch lines in the active imaging area) the full reticle size should be devoted to an array of pixels.  Consequently all peripheral parts and blocks need to be put on a second reticle.  This design strategy is known as multiple reticle stitching.  Unfortunately most fabs are not so happy with multiple reticle stitching, because the reticles have to be exchanged during the exposure of the wafers.  This is time consuming, puts a burden on the use of the equipment and costs a lot of extra money.  For that reason most fabs (if they offer stitching at all) prefer single reticle stitching.  Another important factor to avoid multiple reticle stitching is the cost of the extra mask set.  For more advanced CMOS processes, the mask cost is no longer negligible compared to the cost of the wafer processing.  It was Philips Research Labs that first fabricated large-area imagers based on multiple reticle stitching.

As a consequence, single reticle stitching is much more common than multiple reticle stitching.  In that case, the active imaging array together with the peripheral blocks needs to fit on a single reticle, which leaves a smaller area for the pixel array (compared to a full reticle in the case of multiple reticle stitching).  This results in more exposures for the active area, more stitch lines, more processing time and more expensive processing.  On the other hand, one needs only one mask set.

Altogether in the case of large-area imagers, very often a designer likes to go for multiple reticle, two-dimensional stitching (to avoid too many stitch lines), while most fabs prefer to avoid stitching at all.  It is not always an easy exercise to find an optimum between these two extremes.  There are only a very limited number of fabs/foundries in the world that do allow their customers to go for two-dimensional, multiple reticle stitching.  If stitching is offered at all, then the most common option is single reticle stitching in combination with one-dimensional stitching.

Albert, 09-08-2016.

BUTTING versus STITCHING (2)

Friday, July 8th, 2016

After the butting, it is now stitching time !

To explain the stitching, Figure 1 is included; it shows the complete top-level design/lay-out of an image sensor (CCD or CMOS).


Figure 1 : Sketch of the top-level design of an image sensor.

One can recognize the following parts :

  • The pixel matrix, consisting of r by s pixels,
  • The left (L-) driving and right (R-) driving electronics,
  • Some extra electronics at the top (e.g. biasing circuitry, etc.),
  • The readout part (consisting for instance of CDS, PGA, ADC and other beautiful stuff),
  • And 4 blocks at the corners; they can contain timing generation, reference generation, maybe ADCs if these are not implemented on the columns, etc.

During the normal design phase, one or more designers take care of all these separate blocks, and at the end of the design process all blocks are nicely put together, the design is checked, and finally the complete lay-out is sent to the mask shop to fabricate the masks.  Such a mask set normally consists of several reticles/masks, and in most cases every layer of the lay-out (active area, implants, poly-layer, contact openings, vias, metal layers, etc.) is put on a separate reticle/mask.  As an example, a “simple” CMOS imaging process consists of 30 reticles (or more).  The maximum useful area of a reticle, defined by the field of view of the lithographic equipment, is about 25 mm x 25 mm (the numbers given here are indications and differ from machine to machine).

The limited reticle area defines the maximum size of the chip, unless stitching is applied.  Stitching is a technique that allows the designer to fabricate an image sensor larger than the field of view of the lithographic equipment, while still making use of reticles that fit into the field of view of that equipment.  With stitching, the size of the sensor is only limited by the wafer size (and the budget of the customer).
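As a back-of-the-envelope sketch (using the indicative 25 mm x 25 mm field of view mentioned above), one can estimate how many stitched sub-exposures are needed to cover the active area of a sensor larger than the reticle field :

```python
import math

def stitch_grid(sensor_w_mm, sensor_h_mm, field_w_mm=25.0, field_h_mm=25.0):
    """Number of reticle sub-exposures (columns, rows) needed to cover a sensor
    that is larger than the lithographic field of view."""
    cols = math.ceil(sensor_w_mm / field_w_mm)
    rows = math.ceil(sensor_h_mm / field_h_mm)
    return cols, rows

# A full-format imager (36 mm x 24 mm) does not fit in a 25 mm x 25 mm field :
print(stitch_grid(36.0, 24.0))   # (2, 1) -> stitched out of 2 exposures
```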

To realize a sensor larger than the reticle size, the following strategy is applied : the very last stage of the design, namely putting together the major building blocks as shown in Figure 1, is omitted.  The building blocks themselves are put on the reticle as individual pieces of the design.  This concept is shown in Figure 2.

Figure 2 : Isolated building blocks put separately on the reticle.

These building blocks cannot be used as separate circuits; they can only operate in connection with each other.  By appropriately programming the lithographic tool, each individual block of the reticle can be selected (by means of mechanical blading) and transferred into the photoresist on the wafer.  In this way it is possible to “stitch” the various blocks together on the wafer during the lithographic process.  And because the blocks are stitched during the wafer manufacturing process, it is also possible to make configurations other than the one shown in Figure 1 by using the various blocks multiple times.  An example is illustrated in Figure 3, where the matrix of pixels is repeated 6 times.  To complete the sensor, several other blocks need to be repeated twice or threefold as well.  In this way, an image sensor can be fabricated that is larger than the field of view of the reticle.


Figure 3 : Extending the size of the sensor beyond the reticle field of view.

If the design of the various blocks is carefully done to avoid stitching artifacts, then the final device shown in Figure 3 will look like the one illustrated in Figure 4.  The stitch lines will no longer be visible or noticeable, and the end result of the stitching technology is a large-size, monolithic image sensor.


Figure 4 : Final imaging array after stitching.

These days, stitching is widely applied in the digital imaging industry.  Various lithographic tools have different sizes of the reticle field of view, but in general terms, one can state that all full-format imagers (36 mm x 24 mm) or larger are stitched devices.

Albert, 08-07-2016.