First reactions to the PDAF report

November 20th, 2015

Here are some of the very first reactions from people who read the PDAF report :

“I went through the report and I find it well written with a lot of information” (R.P.)

“Great systematic analysis” (M.G.)

“We find the content useful for our work” (D.A.)

Albert, 20/11/2015.



Measurement of PDAF Pixels

November 11th, 2015

Over the last couple of months, Harvest Imaging performed measurements on the PDAF pixels present in an existing, commercially available camera.  The results of these measurements, including explanations of the PDAF performance and behaviour, are written down in an extensive report.  This report is now available, unfortunately not free of charge, because quite some resources were invested in developing the hardware and software tools needed to extract the right data directly out of the sensor.  If you are interested in the report, please drop me a mail (info “at” harvestimaging “dot” com) and I will be happy to send you a quotation for a copy of the report.

Find below the table of contents of the report, as well as a list of figures included in the report.

Kind regards, Albert.

 

Table of Contents

List of Figures

Introduction

Working principle of PDAF pixels

Theoretical implementation of PDAF pixels

Practical implementation of PDAF pixels

From the theory to the reality

Measurement 1 : influence of F-number

Measurement 2 : influence of the object distance

Measurement 3 : influence of the object angle

Measurement 4 : influence of the PDAF location on the sensor

Measurement 5 : influence of the object colour

Conclusions

Acknowledgement

References

           

List of Figures

Figure 1. Imaging with a positive lens

Figure 2. Requirement to have an image in-focus at the surface of the image sensor

Figure 3. Illustration of rear focus, in-focus and front focus

Figure 4. Illustration of two different rear focus situations

Figure 5. Illustration of two different front focus situations

Figure 6. Optical ray formation from the object to the photodiode of an image sensor

Figure 7. Optical ray formation from the object to the partly optically-shielded photodiodes/pixels of an image sensor

Figure 8. Aptina’s MT9J007C1HS architecture with 9 rows containing auto-focus pixels based on phase detection

Figure 9. Microphotograph of one of the AF rows

Figure 10. Magnified view of an AF row

Figure 11. Microphotograph of an AF row

Figure 12. Sensor architecture indicating the various AF lines as well as the different zones used to read the sensor

Figure 13. Image taken from a random scenery with the AF option switched ON

Figure 14. Analysis of the signals of AF-line 5 in zone 5

Figure 15. Image taken from the same scenery as in Figure 13 with the AF option switched OFF and manually focused on the “macro” position

Figure 16. Analysis of the signal of AF-line 5 in zone 5 in the case the AF system is forced to “macro” position

Figure 17. Image taken from the same scenery as in Figure 13 with the AF option switched OFF, and manually focused on the “infinity” position

Figure 18. Analysis of the signals of AF-line 5 in zone 5 in the case the AF system is forced to “infinity” position

Figure 19. Odd and even PDAF signal for an object placed 50 cm in front of the camera and the lens switched to auto-focus “ON”

Figure 20. Odd and even PDAF signal for an object placed 50 cm in front of the camera and the lens focusing on “infinity”

Figure 21. Odd and even PDAF signal for an object placed 50 cm in front of the camera and the lens focusing on “macro”

Figure 22. Depth-of-field as a function of the object distance for 3 F-numbers, the dotted lines indicate the corresponding hyper-focal distances

Figure 23. PDAF pixel shift as a function of F-number for an object 50 cm in front of the camera and auto focusing

Figure 24. PDAF pixel shift as a function of F-number for an object 50 cm in front of the camera and focusing at “infinity”

Figure 25. PDAF pixel shift as a function of F-number for an object 50 cm in front of the camera and focusing at “macro”

Figure 26. Front-focus situation

Figure 27. PDAF pixel shift as a function of object distance and auto-focus setting of the camera, with F2.8

Figure 28. PDAF pixel shift as a function of object distance and auto-focus setting of the camera, with F5.6

Figure 29. PDAF pixel shift as a function of object distance and auto-focus setting of the camera, with F11

Figure 30. PDAF pixel shift as a function of object distance, lens auto-focus setting on “infinity” and with F2.8

Figure 31. PDAF pixel shift as a function of object distance, lens auto-focus setting on “infinity” and with F5.6

Figure 32. PDAF pixel shift as a function of object distance, lens auto-focus setting on “infinity” and with F11

Figure 33. PDAF pixel shift as a function of object distance, lens auto-focus setting on “macro” and with F2.8

Figure 34. PDAF pixel shift as a function of object distance, lens auto-focus setting on “macro” and with F5.6

Figure 35. PDAF pixel shift as a function of object distance, lens auto-focus setting on “macro” and with F11

Figure 36. PDAF pixel shift as a function of object distance, lens focusing fixed at 60 cm and F2.8

Figure 37. PDAF pixel shift as a function of object distance, lens focusing fixed at 60 cm and F5.6

Figure 38. PDAF pixel shift as a function of object distance, lens focusing fixed at 60 cm and F11

Figure 39. PDAF pixel shift as a function of object angle and auto-focus setting of the camera, with F2.8

Figure 40. PDAF pixel shift as a function of object angle and auto-focus setting of the camera, with F11

Figure 41. PDAF pixel shift as a function of object angle, lens focusing on “infinity” and with F2.8

Figure 42. PDAF pixel shift as a function of object angle, lens focusing on “infinity” and with F11

Figure 43. PDAF pixel shift as a function of object angle, lens focusing on “macro” and with F2.8

Figure 44. PDAF pixel shift as a function of object angle, lens focusing on “macro” and with F11

Figure 45. PDAF pixel shift as a function of the PDAF location in readout zone 5 and auto-focus setting of the camera, with F2.8

Figure 46. PDAF pixel shift as a function of the PDAF location in readout zone 5 and auto-focus setting of the camera, with F11

Figure 47. PDAF pixel shift as a function of the PDAF location in readout zone 5 and lens focusing on “infinity” and with F2.8

Figure 48. PDAF pixel shift as a function of the PDAF location in readout zone 5 and lens focusing on “infinity” and with F11

Figure 49. PDAF pixel shift as a function of the PDAF location in readout zone 5 and lens focusing on “macro” and with F2.8

Figure 50. PDAF pixel shift as a function of the PDAF location in readout zone 5 and lens focusing on “macro” and with F11

Figure 51. Location of the various PDAF regions under test

Figure 52. PDAF pixel shift as a function of the PDAF location on AF-line 5 and on the diagonal with auto-focus setting of the camera and F2.8

Figure 53. PDAF pixel shift as a function of the PDAF location on AF-line 5 and on the diagonal with auto-focus setting of the camera and F11

Figure 54. PDAF pixel shift as a function of the PDAF location on AF-line 5 and on the diagonal with focus set to “infinity” and F2.8

Figure 55. PDAF pixel shift as a function of the PDAF location on AF-line 5 and on the diagonal with focus set to “infinity” and F11

Figure 56. PDAF pixel shift as a function of the PDAF location on AF-line 5 and on the diagonal with focus set to “macro” and F2.8

Figure 57. PDAF pixel shift as a function of the PDAF location on AF-line 5 and on the diagonal with focus set to “macro” and F11

Figure 58. Pulses used to measure the PDAF pulse shifts in zones 2 to 5 on the AF-line 5, lens focus at “infinity” and F2.8

Figure 59. Incoming rays for a PDAF pair at the edge of the sensor

Figure 60. Pulses used to measure the PDAF pulse shifts in zones 2 to 5 on the AF-line 5, lens focus at “macro” and F11

Figure 61. Odd and even pulses in AF-line 5, in AF-line 5 + 1 line and in AF-line 5 + 2 lines, with green light input, F2.8 and focusing at “infinity”

Figure 62. Odd and even pulses in AF-line 5, in AF-line 5 + 1 line and in AF-line 5 + 2 lines, with green light input, F2.8 and focusing at “infinity”

Figure 63. Odd and even pulses in AF-line 5, in AF-line 5 + 1 line and in AF-line 5 + 2 lines, with green light input, F2.8 and focusing at “infinity”

Figure 64. PDAF pixel shift as a function of object colour and auto-focus setting of the camera, with F2.8

Figure 65. PDAF pixel shift as a function of object colour and auto-focus setting of the camera, with F11

Figure 66. PDAF pixel shift as a function of object colour, lens focusing on “infinity” and with F2.8

Figure 67. PDAF pixel shift as a function of object colour, lens focusing on “infinity” and with F11

Figure 68. PDAF pixel shift as a function of object colour, lens focusing on “macro” and with F2.8

Figure 69. PDAF pixel shift as a function of object colour, lens focusing on “macro” and with F11

 

Harvest Imaging Forum : third session

October 15th, 2015

Just an update about the registrations for the third session (Jan. 11-12, 2016) of the Harvest Imaging Forum (3D imaging with ToF) : only 5 seats are left, and then it is over !

If you are interested in attending, do not hesitate too long to register through the Harvest Imaging website.

See you,

Albert, October 15, 2015.

THIRD Session Harvest Imaging Forum 2015

September 21st, 2015

Since last week the first two sessions of the Harvest Imaging Forum 2015 have been sold out.  Originally the idea was NOT to have a third session, but after many people contacted me, I am happy to change my mind.  Our speaker David STOPPA agreed to run a third session on January 11 & 12, 2016 at the same location.  Registration for the third session is open via the Harvest Imaging website.

Thanks for your interest, thanks for your warm reactions !

Albert, 21-9-2015.

How to Measure Anti-Blooming (3)

September 21st, 2015

After discussing the anti-blooming measurement in the vertical direction, this time the measurement of the anti-blooming in the horizontal direction is described.  In principle this can be kept very short : exactly the same procedure is applied in the horizontal direction as was explained for the vertical direction in the previous blog.

Figure 1 shows images captured at various levels of illumination (by changing the exposure time) : top left at the onset of saturation, top right at 10 times overexposure, bottom left at 100 times overexposure and bottom right at 1000 times overexposure.

Figure 1 : Images of the test target at different exposure levels.

A simple software tool was developed to check, for every exposure time, in which column of the images the vertical black-white crossing occurs.  As long as the pixels are not saturated, the software tool simply outputs column number “500”, which actually does not exist in the left half of the image.  As soon as the pixels in the white region reach 75 % of saturation at a particular illumination level, the measurement tool outputs the column number at which the black-white transition occurs.  If the overexposed area reaches the side of the image, the output of the measurement tool is equal to “0”.  The result of this analysis is shown in Figure 2.
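
For illustration, a minimal sketch of what such a tool can look like, assuming 8-bit monochrome frames loaded as 2-D numpy arrays ; the function name, the row-averaging and the hard-coded values are assumptions of mine, not the actual tool :

```python
import numpy as np

def transition_column(img, sat_level=255, thresh=0.75, no_sat=500):
    """Return the column of the vertical black-white transition.

    Outputs `no_sat` (column "500", which does not exist in the left half
    of the image) as long as the white area stays below 75 % of
    saturation, and 0 once the overexposed area reaches the image edge.
    """
    profile = img.mean(axis=0)             # average each column over all rows
    above = profile >= thresh * sat_level  # columns at >= 75 % of saturation
    if not above.any():
        return no_sat                      # white pixels not yet saturated
    if above[0]:
        return 0                           # overexposed area reaches the edge
    return int(np.argmax(above))           # first column crossing the level
```

The same function, applied to the transposed image `img.T`, gives the row-based variant used in the previous blog.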

 

Figure 2 : Position of the black-white transition (indicated as column number) as a function of exposure time.

In Figure 2, from left to right, the following information is available :

  • For small exposure times (< 1 ms), the white pixels are not yet saturated ; this is indicated by a column value equal to “500”,
  • For an exposure time of 1.28 ms, saturation occurs and the black-white transition is located at column number “394”,
  • From this moment onwards the large white area starts growing slowly due to all kinds of optical artefacts, already listed in the previous blog,
  • For exposure times larger than 120 ms, the area of the white spot grows very fast, as can be seen in the graph.  This change in “speed” is due to the blooming artefact that apparently occurs at very high exposure levels.

To calculate a number for the anti-blooming capabilities of the sensor, the same data as present in Figure 2 is shown again on a linear scale as illustrated in Figure 3.

 

Figure 3 : The same information as already illustrated in Figure 2 is shown again, but now on a linear scale.

The two important regions (saturated without blooming, and saturated with blooming) are each approximated by a linear regression line.  As can be seen, below 120 ms exposure time blooming plays no important role, but above 120 ms exposure time blooming dominates all other artefacts.  The exposure time of 120 ms is thus the cross-over exposure time.  (Of course this number of 120 ms depends on the illumination level and has no further meaning.)

The anti-blooming capability is then defined as the ratio of the cross-over exposure time (texp = 120 ms) to the exposure time at which saturation is reached (texp = 1.28 ms), resulting in an anti-blooming capability of 94 times overexposure.
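
For completeness, a minimal sketch of this calculation, assuming the (exposure time, transition position) data points of Figure 3 have already been split into the two regions ; the function below is illustrative, not the actual analysis tool :

```python
import numpy as np

def antiblooming_capability(t_slow, pos_slow, t_fast, pos_fast, t_sat=1.28):
    """Intersect the two regression lines of Figure 3 and divide the
    cross-over exposure time by the exposure time at saturation onset."""
    a1, b1 = np.polyfit(t_slow, pos_slow, 1)  # region: saturated, no blooming
    a2, b2 = np.polyfit(t_fast, pos_fast, 1)  # region: blooming dominates
    t_cross = (b2 - b1) / (a1 - a2)           # line intersection, ~120 ms here
    return t_cross / t_sat                    # ~94 times overexposure
```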

In conclusion : the anti-blooming capabilities in the horizontal direction differ (a bit) from those in the vertical direction.  This can be explained by differences in the boundary definition (lay-out, technology, isolation) between neighbouring pixels in the horizontal compared to the vertical direction.

Albert, 21-09-2015.

How to Measure Anti-Blooming (2)

August 31st, 2015

This blog will focus on the measurement of the anti-blooming capabilities of a monochrome sensor.  As is known, blooming occurs when a (group of) pixel(s) is overexposed and the photodiode can no longer store all generated charges.  With an anti-blooming structure inside the pixel, the excess charges can be drained, e.g. excess electrons can escape through the reset transistor to the power supply.  But every anti-blooming structure has its limitations, and with this measurement we try to find the limits of the anti-blooming structure present in a pixel.

What is checked with this measurement is simply the size of an overexposed sensor area.  Ideally, the size of such an overexposed area should stay constant when the illumination level is increased, even under overexposure.  But in reality, while the illumination level increases, the size of the overexposed area will grow due to several mechanisms :

  • Diffraction at the various edges of the metal lines above a pixel will “guide” photons to neighbouring pixels,
  • Multiple reflections in the multi-level layer structure above the pixels can also “guide” photons to neighbouring pixels,
  • Fresnel reflections on the sensor surface and on the lens surface can result in ghosting structures,
  • Diffraction and reflections at the edges of the iris/diaphragm present in the optical system,
  • Optical and electrical cross-talk between the pixels,
  • Light piping underneath the metal lines and/or metal shields,
  • Blooming after the pixels are saturated and the anti-blooming structure is no longer capable of handling the excess charges.

All these effects are proportional to the amount of light reaching the sensor.  In the measurements, the amount of light on the sensor is modulated by changing the exposure time.  That means that all these effects can be described by a formula with a coefficient that is linear in the exposure time.  This is true for all abovementioned effects except the blooming.  Blooming also has a linear relationship with exposure time, but with a particular threshold : below a certain exposure time the pixels are not saturated, or the anti-blooming performs well enough that no blooming occurs ; above that exposure time the blooming effect starts and is added to all other effects that grow the overexposed sensor area.  So the measurement explores the size of an overexposed area as a function of exposure time and tries to find the knee point at which the blooming effect sets in.
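
Written down as a formula (the symbols below are my own shorthand, not taken from any reference), the size $A$ of the overexposed area behaves as a function of the exposure time $t_{exp}$ like :

$$ A(t_{exp}) \;=\; A_{sat} \;+\; c_{opt}\, t_{exp} \;+\; c_{bloom}\, \max\!\left(0,\; t_{exp} - t_{knee}\right) $$

in which $c_{opt}$ lumps all the optical and electrical artefacts listed above, $c_{bloom}$ represents the blooming contribution, and $t_{knee}$ is the knee point the measurement is looking for.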

The measurement is performed as follows (to measure the anti-blooming along columns in vertical direction) :

  • The sensor is illuminated with a target that creates a black-white transition about halfway up the sensor height ; the black-white transition horizontally crosses a column in the middle of the sensor (e.g. column 380 out of 752 active columns),
  • The location of the black-white transition is monitored in the images generated by the sensor.  To do so, the black-white transition is defined at a level of 75 % of the white part of the test target (75 % is arbitrarily chosen ; any other value can do the job as well),
  • The illumination of the target is kept constant (white fluorescent DC light), and to get different light levels on the sensor, the exposure time of the imager is changed from very small values to very large values,
  • While changing the exposure levels, the location of the black-white transition is constantly calculated to monitor the growth of the overexposed area.

Figure 1 shows one of the images captured at the onset of saturation, Figure 2 illustrates the situation at 10 times overexposure, Figure 3 is the result of overexposing the sensor 100 times, and finally Figure 4 illustrates a factor of 1000 times overexposure.

 

Figure 1 : Image of the test target at the moment the sensor starts to saturate [0015].

 

Figure 2 : Image of the test target at the moment the sensor is 10 times overexposed [0019].

 

Figure 3 : Image of the test target at the moment the sensor is 100 times overexposed [0029].

 

Figure 4 : Image of the test target at the moment the sensor is 1000 times overexposed [0039].

A simple software tool was developed to check, for every exposure time, in which row of the images the horizontal black-white crossing occurs.  As long as the pixels are not saturated, the software tool simply outputs row number “500”, which actually does not exist.  As soon as the pixels in the white region reach 75 % of saturation at a particular illumination level, the measurement tool outputs the row number at which the black-white transition occurs.  If the overexposed area reaches the top of the image, as shown in Figure 4, the output of the measurement tool is equal to “0”.  The result of this analysis is shown in Figure 5.

 

Figure 5 : Position of the black-white transition (indicated as row number) as a function of exposure time.

In Figure 5, from left to right, the following information is available :

  • For small exposure times (< 1 ms), the white pixels do not yet reach 75 % of saturation ; this is indicated by a row value equal to “500”,
  • For an exposure time of 1.28 ms, saturation (= 75 %) occurs and the black-white transition is located at row number “224”,
  • From this moment onwards the large white area starts growing slowly due to all kinds of optical artefacts, already listed earlier in this blog,
  • For exposure times larger than 200 ms, the area of the white spot grows very fast, as can be seen in the graph.  This change in “speed” is due to the blooming artefact that apparently occurs at very high exposure levels.

To calculate a number for the anti-blooming capabilities of the sensor, the same data as present in Figure 5 is shown again on a linear scale as illustrated in Figure 6.

 

Figure 6 : The same information is shown as already illustrated in Figure 5, but now on a linear scale.

The two important regions (saturated without blooming, and saturated with blooming) are each approximated by a linear regression line.  As can be seen, below 174 ms exposure time blooming plays no important role, but above 174 ms exposure time blooming dominates all other artefacts.  The exposure time of 174 ms is thus the cross-over exposure time.

The anti-blooming capability is then defined as the ratio of the cross-over exposure time (texp = 174 ms) to the exposure time at which saturation is reached (texp = 1.28 ms), resulting in an anti-blooming capability of 136 times overexposure.

In conclusion : a long story to explain a relatively simple measurement.  More anti-blooming stuff to follow.

Albert, 28-08-2015.

How (not) to Measure Anti-Blooming (1)

August 10th, 2015

After several months of silence, here is a new blog about measuring image sensors.  This time the blooming and/or anti-blooming of an imager is analyzed.  Actually in this first blog about blooming, it will be shown how NOT to measure blooming.

Blooming is the effect that shows up in the case of strong overexposure of the image sensor.  If the pixels are seen as buckets and the photon-generated electrons as the water contained in these buckets, it is clear that the maximum amount of water that can be stored in a bucket is limited.  If more light falls on the pixels, more water needs to be stored in the buckets.  But once a bucket is completely filled, any extra water spills over into the neighbouring buckets.  This last effect is known as blooming.  Any means in the pixel to prevent blooming is called anti-blooming.

The intention of the measurement reported in this blog is to check the anti-blooming capabilities of an image sensor.  Ideally this would be done by overexposing a single pixel and checking for blooming in the neighbouring pixels, but that is not easy to realize.  An alternative way of measuring the anti-blooming capabilities is to use a colour sensor and illuminate the device with monochrome light.  If the sensor is illuminated with blue light, the green and red pixels will have a smaller sensitivity to the blue light, and the blue pixels will saturate much faster than the green and red ones.  Once a blue pixel is saturated, its anti-blooming should become active.  Without anti-blooming, the blue pixel will spill its excess charge into the green pixels (direct neighbours) and red pixels (diagonal neighbours).  If spilling occurs, the apparent sensitivity of the green and/or red pixels will increase, and this can be measured by monitoring the green and red output signals.

This measurement was carried out, and the result is shown in Figure 1.

 Figure 1 : Response of the different colour planes (R, G, B) of a CMOS sensor under illumination with blue light (470 nm).

For the three colour planes, the regression line of the linear response is created as well.  The ratio between the B and G responses is 4347/1119 = 3.9 ; the ratio between the B and R responses is 4347/140 = 31.  Unfortunately (for the measurement), no change in response can be seen in the G or R channel once the B channel is saturated.  Conclusion : the anti-blooming towards direct neighbours is at least a factor of 3.9, towards diagonal neighbours at least a factor of 31.
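
As a sketch of how these numbers can be obtained, assuming the mean signal of each colour plane has been extracted per exposure time and restricted to the points below saturation (the function and its inputs are assumptions of mine, not the actual analysis) :

```python
import numpy as np

def slope_ratios(exposure, blue, green, red):
    """Fit the linear part of each colour-plane response and return the
    B/G and B/R slope ratios (3.9 and 31 for the data of Figure 1)."""
    s_b = np.polyfit(exposure, blue, 1)[0]   # slope of the B regression line
    s_g = np.polyfit(exposure, green, 1)[0]  # slope of the G regression line
    s_r = np.polyfit(exposure, red, 1)[0]    # slope of the R regression line
    return s_b / s_g, s_b / s_r
```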

A similar measurement can be done with red light.  The result is illustrated in Figure 2.

 Figure 2 : Response of the different colour planes (R, G, B) of a CMOS sensor under illumination with red light (630 nm).

This time, the ratio between the R and G responses is 1169/243 = 4.8 ; the ratio between the R and B responses is 1169/135 = 8.7.  Unfortunately (for the measurement), no change in response can be seen in the G or B channel once the R channel is saturated.  Conclusion : the anti-blooming towards direct neighbours is at least a factor of 4.8, towards diagonal neighbours at least a factor of 8.7.

Finally, the sensor was illuminated with green light, and the 3 colour channels were checked as shown in Figure 3.

 Figure 3 : Response of the different colour planes (R, G, B) of a CMOS sensor under illumination with green light (525 nm).

The ratio between the G and B responses is 1301/365 = 3.6, and the ratio between the G and R responses is 1301/178 = 7.3.  Also in this situation no blooming artefacts can be found.

In conclusion : the anti-blooming capabilities are at least a factor of 7.3 for direct neighbours and at least a factor of 31 for diagonal neighbours.  These numbers are relatively small, but the measurement technique applied is not capable of doing better : the numbers reported are limited by the characterization method and not by the sensor.  So actually, what is shown in this blog is how NOT to measure the anti-blooming of a sensor, unless your device-under-test has a very poor anti-blooming performance.

Albert, 10-08-2015.

Harvest Imaging Forum 2015 : 3D Imaging with ToF

July 29th, 2015

The first session (10/11 Dec.) of the 2015 Harvest Imaging Forum is SOLD OUT.  Apparently Time-of-Flight is still a hot topic in the field.

There are still seats available for the second session (14/15 Dec.).

Albert, 29-07-2015.

Harvest Imaging Forum : 3D Imaging with ToF

July 2nd, 2015

I just want to give an update on the status of registrations :

– for the first session (10/11 Dec. 2015) 2 seats are left,

– for the second session (14/15 Dec. 2015) several seats are still available.

Some people were asking why there is such a hurry for a forum that will take place six months from now.  The reason has to do with the hotel reservation : to get an acceptable rate for the meeting package and for the rooms, the cancellation options offered by the hotel are very limited.  So to make sure that I can give a final GO/NO GO to the hotel without extra financial penalties, early as well as firm participant registrations are needed.  Thanks for your understanding.

Albert, 02-07-2015.

International Image Sensor Workshop 2015 : Conversion Gain Engineering

July 2nd, 2015

Several IISW2015 papers dealt with attempts to obtain a large conversion gain in order to bring down the noise floor (expressed in noise-equivalent electrons) of the imagers.  With a noise floor down to 0.25 electrons, a “standard” CIS could be applied to single-electron detection.  [Deliberately I do not call it “single-photon detection”, because we never have a QE of 100 %, so by definition we do not detect every photon.  The intention of the conversion gain engineering is to detect a single electron present in the PPD and/or at the FD node.]
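
As a quick back-of-the-envelope illustration (my own arithmetic, not taken from any of the papers) : the noise floor in equivalent electrons is simply the output voltage noise divided by the conversion gain, so a larger conversion gain directly relaxes the voltage-noise requirement of the readout chain.

```python
def noise_floor_electrons(noise_uV, conv_gain_uV_per_e):
    """Input-referred noise floor in equivalent electrons."""
    return noise_uV / conv_gain_uV_per_e

# With a conversion gain of 243 uV/electron, an output voltage noise of
# about 61 uV already corresponds to a 0.25-electron noise floor:
print(noise_floor_electrons(61.0, 243.0))   # ~0.25
```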

Tohoku University demonstrated how to extract the various components of the floating diffusion capacitance, and how to further lower this capacitance.  Their main focus was lowering the doping concentration of the FD junction and working without an LDD at the drain side of the source follower.  A conversion gain of 243 uV/electron is reported.  [LDDs are normally introduced to reduce the effect of hot carriers ; what about the hot carriers in this structure without LDD ?]  In a second paper of the same group, the LDD-less FD structure was implemented in a real device.  To overcome the small full-well capacity that comes with a large conversion gain, the LOFIC technique was applied in the pixel.

Dartmouth School of Engineering published their work on Multi-Bit Quanta Image Sensors, showing a measurement histogram indicating that single-electron detection was realized.  The sensor used in the experiment had a conversion gain of 242 uV/electron (just 1 uV/electron lower than Tohoku Univ. !).  The paper suggests that a conversion gain of 1 mV/electron may be realized in the near future.  It was not mentioned how this can be done, but for sure very advanced CIS technologies of 65 nm or less are needed.

Also worthwhile to mention is the work of CEA, in which a p-type in-pixel readout structure is used to obtain a conversion gain of 185 uV/electron.  This is still not large enough to perform single-electron detection, but it is moving in the right direction.

Caeleste presented a small test array based on the pixel that was presented by the same group at ISSCC a couple of years ago.  The p-type source follower is swept between accumulation and inversion to make the 1/f noise uncorrelated between the various multiple-sampling moments.  Apparently there are still problems to solve in this structure, but besides that, a conversion gain of around 400 uV/electron was reported for a 180 nm CIS technology.

A bit in the same direction as the papers described above is the work reported by ON Semiconductor (formerly Truesense Imaging, formerly Kodak) describing an EM-CCD.  The overall concept is not new, but after TI and E2V, ON Semi is the next one to put EM-CCDs on the market.  With the EM concept, the primary goal is not to reach a large conversion gain, but to reach very low (equivalent) noise levels.  To continue along the EM line, E2V published their work on EM-CMOS, fabricated in a 0.18 um process.

Albert, 02-07-2015.