
Merry Christmas and Happy New Year

December 23rd, 2015

Good Bye 2015 ! 

At the very end of 2015, it is good to take a look back at what the year has brought.

Actually I can start with the same statement as last year : “It was a busy year for Harvest Imaging”.  Several imaging trainings were organized, in-house as well as public courses.  Thanks to CEI, FSRM and Framos, who organized the public or open courses.  Thanks to all my customers for the in-house courses.  Thanks to all participants for being there, because without participants there would be no trainings !  And apparently the imaging business is doing very well, because a lot of companies have hired new imaging engineers and consequently they are again asking for more trainings in 2016.  That is of course the best feedback I can get : returning customers.

The Harvest Imaging Forum 2015 was targeting “3D Imaging with ToF”.  I was very happy to get the possibility to attract David Stoppa as the speaker for this forum.  The two sessions in December got very good feedback from the participants, and in January 2016 a third (and last) session will be organized.  The only drawback : the expectations for the third session are high, so David knows what to do.  At this moment I can already announce that also in 2016 another Harvest Imaging forum will be organized.  Topic and speaker(s) are still to be defined.

In 2015 Harvest Imaging brought a new product to the market : a reverse engineering report of a particular imaging feature present in a commercial camera.  The first reverse engineering report was devoted to Phase Detection Auto-Focus pixels.  This is a very interesting topic, very well described in patents of several Japanese camera manufacturers, but the technical results obtained by these PDAF pixels are hardly described in the public literature.  For that reason Harvest Imaging started working on it.  At the end of the road, a nice report is ready and is for sale.  One of the conclusions is that it took more time and effort than originally planned.  (This is in line with Hofstadter’s Law : a project always takes longer than originally scheduled, even if you take Hofstadter’s Law into account.)  But it still was worthwhile doing it.  We learned a lot from the PDAF measurements.  And if a “good” subject pops up in 2016, a new reverse engineering effort will start.

If I look back at the activities that took place in relation to the imaging society (IISS), Harvest Imaging took a very active role in the organization of the 2015 International Image Sensor Workshop.  In close cooperation with imaging peers (Johannes Solhusvik and Pierre Magnan) a successful workshop was organized.  About 100 technical papers were presented during almost 4 days.  Also remarkable : during the social activity of the workshop, all 180 participants went biking in an old cave.  Their reactions afterwards were quite interesting : most of them enjoyed it very much, a few much less.

As you can read, 2015 was very busy and interesting because it was so diverse.  I do hope that the customers of Harvest Imaging can close 2015 with a big smile on their face.  I would like to thank all of them for the business in 2015 and I am looking forward to serving them again in 2016.

Welcome 2016 !  Looking forward to another successful year, although without an IISW, but with a new special issue of IEEE Transactions on Electron Devices on Image Sensors coming up !

Wishing all my readers a Merry Christmas and a Happy New Year.  “See” you soon.

Albert, 23-12-2015.

TDI presentations at 2015 CMOS Workshop CNES (Toulouse, Fr).

December 4th, 2015

Time-Delayed Integration or TDI in CMOS seems to be a hot topic (at least for space applications), but it is still a challenging architecture to build in CMOS technology.  At the CNES CMOS image sensor workshop in Toulouse (held about 10 days ago), there were several presentations on CMOS-TDI ; here is an overview.

C. Virmontois presented the CNES work on TDI.  They finished several projects with ESPROS (CCD on CMOS), IMEC (CCD on CMOS) and ST (digital CMOS TDI).  At this moment work is going on in the field of a multi-spectral TDI with large pixels ; this latest device is also based on a digital TDI.

With respect to the chip(s) made at ESPROS, the following details were given :

  • Fully depleted, BSI,
  • 7.5 um and 6.5 um pixel pitch for monochrome,
  • 26 um and 52 um pixel pitch for multi-spectral,
  • noise level of 600 uV,
  • dark current of 2.6 nA/cm2 at 20 °C,
  • conversion gain : 10 … 15 uV/e-,
  • CTI : 2×10⁻³,
  • INL < 1.5 %,
  • FWC : 92 ke- at 1 V,
  • QE > 70 % in the near-IR.

A second chip made at ESPROS showed improved results, such as :

  • Noise : 350 uV,
  • Dark current : 1 ke-/s (= 10× less),
  • CTI : … 1×10⁻⁴.

During the presentation it was not mentioned which part of the processing was done by ESPROS and/or which part of the processing was done by a third party.

 

M.-Y. Yeh of NAR Labs reported the work on TDI done in his lab :

  • 6 lines, 2 PAN + 4 multi-spectrum,
  • 7.5 um pixel pitch for PAN and 30 um pixel pitch for multi-spectrum,
  • Based on 4T BSI 2.5 um pixels, made in TSMC 0.11 um process,
  • Stitched with 8 blocks next to each other, chip width : 12.288 cm.

 

F. Mayer of e2v mentioned that the first TDI made by his company was already done in 2010 with charge transfer in a 0.18 um CMOS process.  Later more CCD-like devices were made.

For the digital domain, Frederic mentioned :

  • Too much load for the ADC,
  • Motion MTF issues,
  • Dynamic range is OK, but the noise is pretty high.

For the charge domain :

  • Limitation in full-well capacity.

The combination of digital and charge domain can overcome a number of drawbacks, but the architecture will be pretty complex.
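The noise penalty of the digital domain can be illustrated with a small simulation : every digital TDI stage adds its own readout noise, so the signal grows with N but the noise only with √N, whereas charge-domain TDI accumulates before a single readout.  A minimal sketch (all numbers are illustrative assumptions, not measured values) :

```python
import numpy as np

rng = np.random.default_rng(0)

n_stages = 64        # number of TDI stages (illustrative)
signal = 100.0       # electrons collected per stage
read_noise = 30.0    # rms read noise per digital readout, in electrons

# Each stage sees the same scene line and adds an independent read-noise
# sample; digital TDI then sums the N noisy readouts.
samples = signal + read_noise * rng.standard_normal((n_stages, 10_000))
accumulated = samples.sum(axis=0)

snr_single = signal / read_noise                  # one stage, one readout
snr_tdi = accumulated.mean() / accumulated.std()
print(snr_single, snr_tdi)   # SNR improves by about sqrt(64) = 8x
```

In a charge-domain TDI the accumulation happens before the (single) readout, so the read noise is added only once ; that noise-free accumulation is exactly the advantage that is hard to reproduce in CMOS.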

The first generation charge transfer TDI was built on a surface channel CCD, the second generation was provided with a buried channel.

In this presentation, too, the fab that fabricated the CMOS wafers was not mentioned.

 

Ben-Ari of SemiConductor Devices (SCD) gave a large list of performance data of the TDIs made by his company.

In summary :

  • 4 independent TDI arrays,
  • digital running TDI with global shutter,
  • 0.18 um technology,
  • Chip size : 84 x 16 mm2, 2600 pixels x 8 to 64 pixels,
  • Full well : 300 ke-, < 80 e- noise, and 72 dB dynamic range,
  • 50 … 10,000 lines/s,
  • Dark current < 400 e-/s at 25 °C,
  • Single slope ADC,
  • Stitched in 1 dimension.

Current status of these devices : BSI delivered, wafer sort done with good yield.

 

Boulenc gave an overview of IMEC’s TDI status :

  • 0.13 um, CMOS flow with 3.3 V and 1.5 V power supply,
  • Generation 1 (see also CNES presentation) with lateral AB and dedicated implants at the output to make it BSI compatible,
  • Generation 2 : 5 um pixel size, 1025 x 512 pixels, gate spacing between 100 nm and 180 nm, conversion gain : 25 uV/e-, 2.5 nA/cm2 dark current at 25 °C, 0.5 mV noise floor and 17 ke- full well capacity,
  • Generation 3 is in development.

 

In conclusion : a lot of interesting work is going on in the field of TDI-CMOS, but apparently none of the developments has yet resulted in commercially available devices with a performance that matches the existing TDI-CCD performance.  It is more difficult than expected to beat the TDI-CCD noise-free charge transfer in sub-pixel steps in combination with a low dark current.  Depending on which side of the table you are sitting, this can be bad news (for the customers eagerly waiting for TDI-CMOS) or this can be good news (for the engineers, because there are still enough challenging developments ahead of us).

Albert, 26-11-2015.

First reactions on the PDAF report

November 20th, 2015

Here are some very first reactions of people who read the PDAF report :

“I went through the report and I find it well written with a lot of information” (R.P.)

“Great systematic analysis” (M.G.)

“We find the content useful for our work” (D.A.)

Albert, 20/11/2015.

 

 

 


Measurement of PDAF Pixels

November 11th, 2015

Over the last couple of months, Harvest Imaging performed measurements on the PDAF pixels present in an existing, commercially available camera.  The results of these measurements, including explanations for the PDAF performance/behaviour are written down in an extensive report.  This report is now available.  Unfortunately not free of charge, because quite some resources were invested in developing hard- and software tools to extract the right data directly out of the sensor.  If you are interested in the report, please drop me a mail (info “at” harvestimaging “dot” com), and I am happy to send you a quotation to get a copy of the report.

Find below the table of contents of the report, as well as a list of figures included in the report.

Kind regards, Albert.

 

Table of Contents

List of Figures

Introduction

Working principle of PDAF pixels

Theoretical implementation of PDAF pixels

Practical implementation of PDAF pixels

From the theory to the reality

Measurement 1 : influence of F-number

Measurement 2 : influence of the object distance

Measurement 3 : influence of the object angle

Measurement 4 : influence of the PDAF location on the sensor

Measurement 5 : influence of the object colour

Conclusions

Acknowledgement

References

           

List of Figures

Figure 1. Imaging with a positive lens

Figure 2. Requirement to have an image in-focus at the surface of the image sensor

Figure 3. Illustration of rear focus, in-focus and front focus

Figure 4. Illustration of two different rear focus situations

Figure 5. Illustration of two different front focus situations

Figure 6. Optical ray formation from the object till the photodiode of an image sensor

Figure 7. Optical ray formation from the object till the partly optically-shielded photodiodes/pixels of an image sensor

Figure 8. Aptina’s MT9J007C1HS architecture with 9 rows containing auto-focus pixels based on phase detection

Figure 9. Microphotograph of one of the AF rows

Figure 10. Magnified view of an AF row

Figure 11. Microphotograph of an AF row

Figure 12. Sensor architecture indicating the various AF lines as well as the different zones used to read the sensor

Figure 13. Image taken from a random scenery with the AF option switched ON

Figure 14. Analysis of the signals of AF-line 5 in zone 5

Figure 15. Image taken from the same scenery as in Figure 13 with the AF option switched OFF and manually focused on the “macro” position

Figure 16. Analysis of the signal of AF-line 5 in zone 5 in the case the AF system is forced to “macro” position

Figure 17. Image taken from the same scenery as in Figure 13 with the AF option switched OFF, and manually focused on the “infinity” position

Figure 18. Analysis of the signals of AF-line 5 in zone 5 in the case the AF system is forced to “infinity” position

Figure 19. Odd and even PDAF signal for an object placed 50 cm in front of the camera and the lens switched to auto-focus “ON”

Figure 20. Odd and even PDAF signal for an object placed 50 cm in front of the camera and the lens focusing on “infinity”

Figure 21. Odd and even PDAF signal for an object placed 50 cm in front of the camera and the lens focusing on “macro”

Figure 22. Depth-of-field as a function of the object distance for 3 F-numbers, the dotted lines indicate the corresponding hyper-focal distances

Figure 23. PDAF pixel shift as a function of F-number for an object 50 cm in front of the camera and auto focusing

Figure 24. PDAF pixel shift as a function of F-number for an object 50 cm in front of the camera and focusing at “infinity”

Figure 25. PDAF pixel shift as a function of F-number for an object 50 cm in front of the camera and focusing at “macro”

Figure 26. Front-focus situation

Figure 27. PDAF pixel shift as a function of object distance and auto-focus setting of the camera, with F2.8

Figure 28. PDAF pixel shift as a function of object distance and auto-focus setting of the camera, with F5.6

Figure 29. PDAF pixel shift as a function of object distance and auto-focus setting of the camera, with F11

Figure 30. PDAF pixel shift as a function of object distance, lens auto-focus setting on “infinity” and with F2.8

Figure 31. PDAF pixel shift as a function of object distance, lens auto-focus setting on “infinity” and with F5.6

Figure 32. PDAF pixel shift as a function of object distance, lens auto-focus setting on “infinity” and with F11

Figure 33. PDAF pixel shift as a function of object distance, lens auto-focus setting on “macro” and with F2.8

Figure 34. PDAF pixel shift as a function of object distance, lens auto-focus setting on “macro” and with F5.6

Figure 35. PDAF pixel shift as a function of object distance, lens auto-focus setting on “macro” and with F11

Figure 36. PDAF pixel shift as a function of object distance, lens focusing fixed at 60 cm and F2.8

Figure 37. PDAF pixel shift as a function of object distance, lens focusing fixed at 60 cm and F5.6

Figure 38. PDAF pixel shift as a function of object distance, lens focusing fixed at 60 cm and F11

Figure 39. PDAF pixel shift as a function of object angle and auto-focus setting of the camera, with F2.8

Figure 40. PDAF pixel shift as a function of object angle and auto-focus setting of the camera, with F11

Figure 41. PDAF pixel shift as a function of object angle, lens focusing on “infinity” and with F2.8

Figure 42. PDAF pixel shift as a function of object angle, lens focusing on “infinity” and with F11

Figure 43. PDAF pixel shift as a function of object angle, lens focusing on “macro” and with F2.8

Figure 44. PDAF pixel shift as a function of object angle, lens focusing on “macro” and with F11

Figure 45. PDAF pixel shift as a function of the PDAF location in readout zone 5 and auto-focus setting of the camera, with F2.8

Figure 46. PDAF pixel shift as a function of the PDAF location in readout zone 5 and auto-focus setting of the camera, with F11

Figure 47. PDAF pixel shift as a function of the PDAF location in readout zone 5 and lens focusing on “infinity” and with F2.8

Figure 48. PDAF pixel shift as a function of the PDAF location in readout zone 5 and lens focusing on “infinity” and with F11

Figure 49. PDAF pixel shift as a function of the PDAF location in readout zone 5 and lens focusing on “macro” and with F2.8

Figure 50. PDAF pixel shift as a function of the PDAF location in readout zone 5 and lens focusing on “macro” and with F11

Figure 51. Location of the various PDAF regions under test

Figure 52. PDAF pixel shift as a function of the PDAF location on AF-line 5 and on the diagonal with auto-focus setting of the camera and F2.8

Figure 53. PDAF pixel shift as a function of the PDAF location on AF-line 5 and on the diagonal with auto-focus setting of the camera and F11

Figure 54. PDAF pixel shift as a function of the PDAF location on AF-line 5 and on the diagonal with focus set to “infinity” and F2.8

Figure 55. PDAF pixel shift as a function of the PDAF location on AF-line 5 and on the diagonal with focus set to “infinity” and F11

Figure 56. PDAF pixel shift as a function of the PDAF location on AF-line 5 and on the diagonal with focus set to “macro” and F2.8

Figure 57. PDAF pixel shift as a function of the PDAF location on AF-line 5 and on the diagonal with focus set to “macro” and F11

Figure 58. Pulses used to measure the PDAF pulse shifts in zones 2 to 5 on the AF-line 5, lens focus at “infinity” and F2.8

Figure 59. Incoming rays for a PDAF pair at the edge of the sensor

Figure 60. Pulses used to measure the PDAF pulse shifts in zones 2 to 5 on the AF-line 5, lens focus at “macro” and F11

Figure 61. Odd and even pulses in AF-line 5, in AF-line 5 + 1 line and in AF-line 5 + 2 lines, with green light input, F2.8 and focusing at “infinity”

Figure 62. Odd and even pulses in AF-line 5, in AF-line 5 + 1 line and in AF-line 5 + 2 lines, with green light input, F2.8 and focusing at “infinity”

Figure 63. Odd and even pulses in AF-line 5, in AF-line 5 + 1 line and in AF-line 5 + 2 lines, with green light input, F2.8 and focusing at “infinity”

Figure 64. PDAF pixel shift as a function of object colour and auto-focus setting of the camera, with F2.8

Figure 65. PDAF pixel shift as a function of object colour and auto-focus setting of the camera, with F11

Figure 66. PDAF pixel shift as a function of object colour, lens focusing on “infinity” and with F2.8

Figure 67. PDAF pixel shift as a function of object colour, lens focusing on “infinity” and with F11

Figure 68. PDAF pixel shift as a function of object colour, lens focusing on “macro” and with F2.8

Figure 69. PDAF pixel shift as a function of object colour, lens focusing on “macro” and with F11

 

Harvest Imaging Forum : third session

October 15th, 2015

Just an update about the registrations for the third session (Jan. 11-12, 2016) of the Harvest Imaging Forum (3D imaging with ToF) : 5 seats are left and then it is over !

If you are interested in attending, do not hesitate too long to register through the website of Harvest Imaging.

See you,

Albert, October 15, 2015.

THIRD Session Harvest Imaging Forum 2015

September 21st, 2015

Since last week the first two sessions of the Harvest Imaging Forum 2015 have been sold out.  Originally the idea was NOT to have a third session, but after many people contacted me, I am happy to change my mind.  Our speaker David STOPPA agreed to run a 3rd session on January 11 & 12, 2016 at the same location.  Registration for the third session is open via the Harvest Imaging website.

Thanks for your interest, thanks for your warm reactions !

Albert, 21-9-2015.

How to Measure Anti-Blooming (3)

September 21st, 2015

After discussing the anti-blooming measurement in the vertical direction, this time the measurement of the anti-blooming in the horizontal direction is described.  In principle this blog can be very short : exactly the same procedure is applied in the horizontal direction as was explained in the previous blog.

Figure 1 shows images captured at various levels of illumination (by changing the exposure time) : top left at the onset of saturation, top right at 10 times overexposure, bottom left at 100 times overexposure and bottom right at 1000 times overexposure.

Figure 1 : Images of the test target at different exposure levels.

A simple software tool was developed to check, for every exposure time, at which column the vertical black-white crossing occurs in the images.  If the pixels are not yet saturated, the software tool simply outputs column number “500”, which does not exist in the left half of the image.  As soon as the pixels in the white region reach 75 % of saturation at a particular illumination level, the measurement tool outputs the column number at which the black-white transition occurs.  If the overexposed area reaches the side of the image, the output of the measurement tool equals “0”.  The result of this analysis is shown in Figure 2.

 

Figure 2 : Position of the black-white transition (indicated as column number) as a function of exposure time.

In Figure 2, from left to right, the following information is available :

  • For small exposure times (< 1 ms), the white pixels are not yet saturated ; this is indicated by the column value equal to “500”,
  • For an exposure value of 1.28 ms, saturation occurs and the black-white transition is located at column number “394”,
  • From this moment onwards the large white area starts growing slowly due to all kinds of optical artefacts, already listed in the previous blog,
  • For exposure times larger than 120 ms, the area of the white spot grows very fast, as can be seen in the graph.  This change in “speed” is due to the blooming artefact that apparently occurs at very high exposure levels.

To calculate a number for the anti-blooming capabilities of the sensor, the same data as present in Figure 2 is shown again on a linear scale as illustrated in Figure 3.

 

Figure 3 : The same information as already illustrated in Figure 2 is shown again, but now on a linear scale.

The two important regions (saturated without blooming and saturated with blooming) are each approximated by means of a linear regression line.  As can be seen, below 120 ms exposure time blooming plays no important role, but above 120 ms exposure time blooming dominates over all other artefacts.  The exposure time of 120 ms is the cross-over exposure time.  (Of course this number of 120 ms depends on the illumination level and has no further meaning.)

The anti-blooming capability is then defined as the ratio of the exposure time at which saturation is reached (texp = 1.28 ms) and the cross-over exposure time (texp = 120 ms), resulting in an anti-blooming capability of 94 times overexposure.

In conclusion : the anti-blooming capabilities in horizontal direction differ (a bit) from the capabilities in vertical direction.  This can be explained by a different boundary definition (lay-out, technology, isolation), between neighbouring pixels in horizontal compared to vertical direction.

Albert, 21-09-2015.

How to Measure Anti-Blooming (2)

August 31st, 2015

This blog will focus on the measurement of the anti-blooming capabilities of a monochrome sensor.  As is known, blooming occurs when a (group of) pixel(s) is overexposed and the photodiode can no longer store all generated charges.  With an anti-blooming structure inside the pixel, the excess charges can be drained ; e.g. excess electrons can escape through the reset transistor to the power supply.  But every anti-blooming structure has its limitations, and with this measurement we try to find the limits of the anti-blooming structure present in a pixel.

What is checked with this measurement is simply the size of an overexposed sensor area.  Ideally, the size of such an overexposed area should stay constant when the illumination level is increased further.  But in reality, while the illumination level increases, the size of the overexposed area will grow due to several mechanisms :

  • Diffraction at the various edges of the metal lines above a pixel will “guide” photons to neighbouring pixels,
  • Multiple reflections in the multi-level layer structure above the pixels can also “guide” photons to neighbouring pixels,
  • Fresnel reflections on the sensor surface and on the lens surface can result in ghosting structures,
  • Diffraction and reflections at the edges of the iris/diaphragm present in the optical system,
  • Optical and electrical cross-talk between the pixels,
  • Light piping underneath the metal lines and/or metal shields,
  • Blooming effects after the pixels are saturated and the anti-blooming is no longer capable of handling the excess charges.

All these effects are proportional to the amount of light that reaches the sensor.  In the measurements, the amount of light to the sensor is modulated by changing the exposure time.  That means that all these effects can be written down with a formula that contains a linear coefficient in relation to the exposure time.  This is true for all abovementioned effects, except for the blooming.  Blooming also has a linear relationship with exposure time, but with a particular threshold.  Below a certain exposure time, the pixel is not saturated or the anti-blooming is performing well enough that no blooming occurs.  Above that exposure time, the blooming effect starts and adds to all the other effects that grow the overexposed sensor area.  So the measurement explores the size of an overexposed area as a function of exposure time and tries to find the knee point at which the blooming effect sets in.

The measurement is performed as follows (to measure the anti-blooming along columns in vertical direction) :

  • The sensor is illuminated with a target that creates a black-white transition about halfway up the sensor height, and the black-white transition horizontally crosses a column in the middle of the sensor (e.g. column 380 out of 752 active columns),
  • The location of the black-white transition is monitored in the images generated by the sensor.  To do so, the black-white transition is defined at a level of 75 % of the white part of the test target (75 % is arbitrarily chosen ; any other value can do the job as well),
  • The illumination of the target is kept constant (white fluorescent DC light), and to get different light levels on the sensor, the exposure time of the imager is changed from very small values to very large values,
  • While changing the exposure levels, the location of the black-white transition is constantly calculated to monitor the growth of the overexposed area.

Figure 1 shows one of the images captured at the onset of saturation, Figure 2 illustrates the situation at 10 times overexposure, Figure 3 is the result while overexposing the sensor 100 times, and finally Figure 4 illustrates a factor of 1000 times overexposure.

 

Figure 1 : Image of the test target at the moment the sensor starts to saturate [0015].

 

Figure 2 : Image of the test target at the moment the sensor is 10 times overexposed [0019].

 

Figure 3 : Image of the test target at the moment the sensor is 100 times overexposed [0029].

 

Figure 4 : Image of the test target at the moment the sensor is 1000 times overexposed [0039].

A simple software tool was developed to check, for every exposure time, at which row the horizontal black-white crossing occurs in the images.  If the pixels are not yet saturated, the software tool simply outputs row number “500”, which actually does not exist.  As soon as the pixels in the white region reach 75 % of saturation at a particular illumination level, the measurement tool outputs the row number at which the black-white transition occurs.  If the overexposed area reaches the top of the image, as shown in Figure 4, the output of the measurement tool equals “0”.  The result of this analysis is shown in Figure 5.
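The software tool itself is not published, but its behaviour as described above can be sketched in a few lines of Python (numpy assumed ; the column index, threshold and the “500” sentinel follow the text, the frame below is synthetic) :

```python
import numpy as np

def transition_row(image, column, threshold, sentinel=500):
    """Row index of the black-to-white transition along one column.

    Scans the column from the top of the image (the white, growing
    area sits at the bottom).  Returns `sentinel` (a row number that
    does not exist) when no pixel reaches the threshold, i.e. the
    white region is not yet at 75 % of saturation, and 0 when the
    overexposed area has reached the top of the image.
    """
    profile = image[:, column].astype(float)
    above = profile >= threshold
    if not above.any():
        return sentinel           # not yet saturated
    return int(np.argmax(above))  # first row at/above the threshold

# Example with a synthetic 480 x 752 frame, transition at row 224:
frame = np.zeros((480, 752))
frame[224:, :] = 4000             # white (saturated) lower part
print(transition_row(frame, column=380, threshold=3000))  # → 224
```

Running this for every exposure time yields exactly the staircase of transition positions plotted in Figure 5.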

 

Figure 5 : Position of the black-white transition (indicated as row number) as a function of exposure time.

In Figure 5, from left to right, the following information is available :

  • For small exposure times (< 1 ms), the white pixels do not yet reach 75 % of saturation ; this is indicated by the row value equal to “500”,
  • For an exposure value of 1.28 ms, saturation occurs (= 75 %) and the black-white transition is located at row number “224”,
  • From this moment onwards the large white area starts growing slowly due to all kinds of optical artefacts, already listed earlier in this blog,
  • For exposure times larger than 200 ms, the area of the white spot grows very fast, as can be seen in the graph.  This change in “speed” is due to the blooming artefact that apparently occurs at very high exposure levels.

To calculate a number for the anti-blooming capabilities of the sensor, the same data as present in Figure 5 is shown again on a linear scale as illustrated in Figure 6.

 

Figure 6 : The same information is shown as already illustrated in Figure 5, but now on a linear scale.

The two important regions (saturated without blooming and saturated with blooming) are each approximated by means of a linear regression line.  As can be seen, below 174 ms exposure time blooming plays no important role, but above 174 ms exposure time blooming dominates over all other artefacts.  The exposure time of 174 ms is the cross-over exposure time.

The anti-blooming capability is then defined as the ratio of the exposure time at which saturation is reached (texp = 1.28 ms) and the cross-over exposure time (texp = 174 ms), resulting in an anti-blooming capability of 136 times overexposure.
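The cross-over point can be extracted automatically by fitting the two regimes and intersecting the regression lines.  A minimal sketch (the exposure times and row positions below are synthetic stand-ins for the measured data of Figure 6, with a knee placed at 174 ms) :

```python
import numpy as np

def crossover_exposure(t_exp, row, split):
    """Intersection of two regression lines fitted to the slow
    (optical artefacts) and fast (blooming) growth regions.

    `split` is a rough guess, in the same units as `t_exp`, that
    separates the two regimes before fitting.
    """
    t = np.asarray(t_exp, float)
    r = np.asarray(row, float)
    slow = t < split
    a1, b1 = np.polyfit(t[slow], r[slow], 1)    # saturated, no blooming
    a2, b2 = np.polyfit(t[~slow], r[~slow], 1)  # blooming dominates
    return (b2 - b1) / (a1 - a2)                # where the lines meet

# Synthetic data mimicking Figure 6: slow drift, then fast blooming growth
t = np.linspace(2, 400, 200)                       # exposure times [ms]
r = 224 - 0.05 * t - 2.0 * np.clip(t - 174, 0, None)
t_cross = crossover_exposure(t, r, split=175)
print(round(t_cross, 1))   # → 174.0
```

The anti-blooming capability then follows as the ratio of `t_cross` to the exposure time at which saturation is reached, e.g. 174 / 1.28 ≈ 136 in the vertical case above.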

In conclusion : a long story to explain a relatively simple measurement.  More anti-blooming stuff to follow.

Albert, 28-08-2015.

How (not) to Measure Anti-Blooming (1)

August 10th, 2015

After several months of silence, here is a new blog about measuring image sensors.  This time the blooming and/or anti-blooming of an imager is analyzed.  Actually in this first blog about blooming, it will be shown how NOT to measure blooming.

Blooming is the effect that shows up in the case of strong overexposure of the image sensor.  If the pixels are seen as buckets and the photon-generated electrons are seen as the water contained in these buckets, it is clear that the maximum amount of water that can be stored in a bucket is limited.  If more light falls on the pixels, more water needs to be stored in the buckets.  But once a bucket is completely filled, any extra water spills over into the neighbouring buckets.  This last effect is known as blooming.  Any means in the pixel to prevent blooming is called anti-blooming.

The intention of the measurement reported in this blog is to check the anti-blooming capabilities of an image sensor.  Ideally this can be done by overexposing a single pixel and checking for any blooming in the neighbouring pixels, but that is not easy to realize.  An alternative way of measuring the anti-blooming capabilities is to use a colour sensor and illuminate the device with monochrome light.  If the sensor is illuminated with blue light, the green and red pixels will have a smaller sensitivity to the blue light and the blue pixel will saturate much faster than the green and red pixels.  Once the blue pixel is saturated, its anti-blooming should become active.  Without anti-blooming, the blue pixel will spill over its charge into the green pixels (direct neighbours) and red pixels (diagonal neighbours).  If spilling occurs, the sensitivity of the green and/or red pixel will increase, and this can be measured by monitoring the green and red output signals.
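The slope-ratio bound behind this method is easy to compute.  A minimal sketch (numpy assumed ; the data below is synthetic and only mimics the blue-light responses quoted later from Figure 1) :

```python
import numpy as np

def ab_lower_bound(exposure, saturating, neighbour):
    """Lower bound on the anti-blooming factor from a colour-filter
    measurement : ratio of the response slopes, fitted only on the
    part where the saturating channel is still linear.

    If the neighbour channel keeps its slope after the other channel
    saturates, no blooming was detected and the slope ratio is only
    a lower bound, not the true anti-blooming capability.
    """
    exposure = np.asarray(exposure, float)
    lin = saturating < 0.9 * np.max(saturating)   # unsaturated region
    s_slope = np.polyfit(exposure[lin], saturating[lin], 1)[0]
    n_slope = np.polyfit(exposure[lin], neighbour[lin], 1)[0]
    return s_slope / n_slope

# Blue illumination : B saturates, G keeps rising linearly (no spill).
t = np.arange(0.0, 11.0)               # relative exposure times
blue = np.minimum(4347 * t, 4347 * 7)  # clips (saturates) at t = 7
green = 1119 * t                       # direct neighbour, no blooming
print(round(ab_lower_bound(t, blue, green), 1))  # → 3.9
```

As long as the neighbour channel shows no kink once the saturating channel clips, this ratio is all the measurement can report.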

What is explained above is realized and the result is shown in Figure 1.

 Figure 1 : Response of the different colour planes (R, G, B) of a CMOS sensor under illumination with blue light (470 nm).

For the three colour planes, the regression line of the linear response is created as well.  The ratio between the B and G responses is 4347/1119 = 3.9.  The ratio between the B and R responses is 4347/140 = 31.  Unfortunately (for the measurement), no change in response can be seen in the G or R channel once the B channel is saturated.  Conclusion : the anti-blooming factor towards direct neighbours is at least 3.9, towards diagonal neighbours at least 31.

A similar measurement can be done with red light.  The result is illustrated in Figure 2.

 Figure 2 : Response of the different colour planes (R, G, B) of a CMOS sensor under illumination with red light (630 nm).

This time, the ratio between the R and G responses is 1169/243 = 4.8.  The ratio between the R and B responses is 1169/135 = 8.7.  Unfortunately (for the measurement), no change in response can be seen in the G or B channel once the R channel is saturated.  Conclusion : the anti-blooming factor towards direct neighbours is at least 4.8, towards diagonal neighbours at least 8.7.

Finally, the sensor was illuminated with green light, and the 3 colour channels were checked as shown in Figure 3.

 Figure 3 : Response of the different colour planes (R, G, B) of a CMOS sensor under illumination with green light (525 nm).

The ratio between the G and B responses is 1301/365 = 3.6 ; the ratio between the G and R responses is 1301/178 = 7.3.  Also for this situation no blooming artefacts could be found.

In conclusion : the anti-blooming capabilities are at least a factor of 7.3 for direct neighbours and at least a factor of 31 for diagonal neighbours.  These numbers are relatively small, but the measurement technique applied is not capable of doing better.  The numbers reported are limited by the characterization method and not by the sensor.  So actually, what is shown in this blog is how NOT to measure the anti-blooming of a sensor, unless your device-under-test has a very poor anti-blooming performance.

Albert, 10-08-2015.

Harvest Imaging Forum 2015 : 3D Imaging with ToF

July 29th, 2015

The first session (10/11 Dec.) of the 2015 Harvest Imaging Forum is SOLD OUT.  Apparently Time-of-Flight is still a hot topic in the field.

There are still seats available for the second session (14/15 Dec.).

Albert, 29-07-2015.