ISSCC 2017 (3)

February 9th, 2017

Tsutomu Haruta of Sony presented “A 1/2.3 inch 20 Mpixel 3-layer stacked CMOS image sensor with DRAM”.  In just a few words : the sensor is composed of 3 layers : the top layer contains the photon conversion part (BSI), the middle layer contains a DRAM and the bottom layer contains the processing part.  It is the first time that a stacked imager with 3 layers is shown.  The mutual connections between the various levels of silicon are realized by TSVs.  The imaging part can be read out very fast, much faster than the interface with the external world can handle.  So the DRAM is used as an intermediate frame buffer : fast readout of the imaging part with the data stored in the DRAM, next a slow readout of the DRAM to accommodate the slow interface of the total system.  The pixels are arranged in a 2 x 4 shared pixel concept, with 8 column readout lines for two groups of 2 x 4 pixels.  4 rows of column-level ADCs are included to allow the fast readout of the focal plane.  Remarkable is the fact that the data generated in the top layer has to be transported in the analog domain to the lowest layer, where the ADCs are located.  Next the digital data is stored in the middle layer, being the DRAM.  It was not mentioned during the presentation, nor during the Q&A, why the DRAM is located between the top and bottom layers.

With this particular architecture of the system, one can read out the sensor part extremely fast into the DRAM and read out the DRAM relatively slowly towards the outside world.  In this way artefacts of the rolling shutter are limited.  Once the data is available in the DRAM, it is also possible to work in different formats, even in parallel with each other : full resolution, or limited resolution as a kind of digital zoom.  Another very nice feature of the sensor is its binning capability : by combining binning on the floating diffusion with binning in the voltage domain, the resolution of the imager can be drastically reduced.  If this reduced-resolution image is then sampled at a high speed, stored in DRAM and retrieved at a lower speed, an “on-chip” slow motion is created.  In the binned lower-resolution mode, it is possible to store 63 frames in the DRAM, captured at a speed of 960 fps.  Demonstrations of this feature were shown during and after the presentation.  Great images !
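The arithmetic behind such an on-chip slow motion can be sketched quickly.  The 63 frames and 960 fps are the figures quoted in the paper; the 30 fps playback rate is my own assumption for illustration:

```python
def slow_motion_factors(n_frames, capture_fps, playback_fps):
    """Return (captured real time [s], playback time [s], slowdown factor)."""
    captured_s = n_frames / capture_fps    # real-world time span of the burst
    playback_s = n_frames / playback_fps   # duration of the clip on screen
    return captured_s, playback_s, capture_fps / playback_fps

captured, playback, slowdown = slow_motion_factors(63, 960, 30)
print(f"{captured * 1000:.1f} ms of action becomes {playback:.2f} s of video "
      f"({slowdown:.0f}x slow motion)")
# 65.6 ms of action becomes 2.10 s of video (32x slow motion)
```

So even this modest 63-frame buffer turns a 66 ms event into about 2 seconds of viewable video.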

Some numbers : in total 17 layers of interconnect are used in the 3-layer stacked imager : 6M for the CIS (90 nm), 4M for the DRAM (30 nm) and 7M for the logic (40 nm).  The imager has 21 Mpixels, a 1.22 um pixel pitch, the DRAM has 1 Gbit, and the interface is MIPI based.

Shin’ichi Machida of Panasonic presented a paper entitled : “A 2.1 Mpixel organic-film stacked RGB-IR image sensor with electrically controllable IR sensitivity”.  Panasonic already presented a couple of papers with organic films at last year’s ISSCC.  But in this new presentation, 2 organic films are stacked on top of each other : the top one is sensitive to IR light, the bottom one to RGB.  Both layers need a particular voltage across them to become light sensitive, and this light sensitivity has a particular step function : below a kind of threshold voltage the organic film is not light sensitive, and this threshold voltage differs between the RGB film (low threshold) and the IR film (high threshold).  So if a large voltage is applied across the sandwich of the two organic films, both become light sensitive; if a lower voltage is applied across the sandwich, only the RGB film becomes light sensitive.  In this way the light sensitivity of the IR film can be switched on and off while the RGB film stays active.  (Although the sensitivity of the RGB film drops to about 50 % if the IR film is switched off.)  Overall an interesting feature that imagers with classical pixels cannot show.  Unfortunately (just like last year) no information was given about noise, nor about dark performance; otherwise a good presentation.

Albert, 9-2-2017.

ISSCC 2017 (2)

February 8th, 2017

Wootaek Lim of the University of Michigan talked about “A sub-nW 80mlx-to-1.26Mlx self-referencing light-to-digital converter with AlGaAs photodiode”.  The work focuses on a wearable light sensor, for instance to measure the cumulative light exposure a person receives over a long period of time (e.g. UV radiation exposure).  Crucial parameters for this application are low power consumption, wide dynamic range and low relative error.  These requirements are realized by using a special ring oscillator and counter as an integrating ADC, by using the photodiode voltage as the input in combination with a divider to extend the measurable voltage range, and by linearly coding the light intensity in the log-log domain.  All these techniques were explained in detail, including circuit diagrams.  As a result, with these new techniques, the power was reduced by over 1000 x and the measurable range extends from 80 mlx up to 1.26 Mlx, all combined with the lowest conversion energy of 4.13 nJ/conv. at 50 klx.  The sensor is fully functional between -20 and +85 deg.C.
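As a quick sanity check on those numbers : a measurable range of 80 mlx to 1.26 Mlx corresponds to roughly 144 dB of optical dynamic range:

```python
import math

def dynamic_range_db(max_level, min_level):
    """Dynamic range in dB between the largest and smallest measurable level."""
    return 20 * math.log10(max_level / min_level)

print(f"{dynamic_range_db(1.26e6, 80e-3):.1f} dB")   # ~144 dB
```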


“A 1.8 e temporal noise over 110dB dynamic range 3.4 um pixel pitch global shutter CMOS image sensor with dual-gain amplifiers, SS-ADC and multiple accumulation shutter” by Masahiro Kobayashi of Canon.  This was a great paper with a great presentation of the obtained results, but I did have serious doubts about the novelty of the work (and I was not the only one).  What is done is the implementation of a global shutter with a storage node in the charge domain.  This results in the so-called 6T pixel architecture.  To increase the fill factor of the pixels, 2-by-1 sharing is applied.  In a classical GS pixel, the charge needs to be stored on the PPD, on the SG and on the FD.  If these are all equal to each other in capacitive value, a particular full well is obtained which is pretty limited.  The idea now is to make the PPD smaller and the SG larger.  In that case the full well would be determined by the small PPD, but during the exposure the PPD can be emptied multiple times, and then the weakest link in the chain shifts to the larger SG.  This is not new : Canon themselves introduced this already at IEDM 2016, and Aptina published a similar solution at the IISW in 2009.  Nevertheless, besides this general idea, the presented sensor has a funnel-shaped light guide structure above the pixels and an optimized light shield to keep the PLS low.  To enhance the dynamic range of the sensor, the columns are provided with a gain stage that automatically chooses between a gain of 1x or 4x.  With some clever timing of the transfer of the PPD and with an increased readout speed of the sensor, extra new options can be added, such as a wider dynamic range and in-pixel coded exposure.
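The full-well arithmetic behind this multiple-accumulation idea can be sketched as follows; the capacity numbers below are made-up illustrative values, not Canon’s:

```python
def effective_full_well(ppd_e, sg_e, n_transfers):
    """Charge (e-) collectable in one exposure with n PPD-to-SG transfers.

    With a single transfer the full well is set by the smallest node (here
    the PPD); emptying the PPD n times into a larger SG shifts the limit
    to the SG capacity.
    """
    return min(n_transfers * ppd_e, sg_e)

# classical single-transfer GS pixel: limited by the (small) PPD
print(effective_full_well(ppd_e=2500, sg_e=10000, n_transfers=1))   # 2500
# multiple accumulation: 4 transfers shift the limit to the larger SG
print(effective_full_well(ppd_e=2500, sg_e=10000, n_transfers=4))   # 10000
```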

Results and images were shown during the presentation; despite the fact that not everything is/was new, the results were impressive : 5 Mpixels, up to 120 fps, 450 mW, pixel pitch 3.4 um, 130 nm 1P4M + LS process, 1.8 e noise floor, maximum 79 dB dynamic range (111 dB in the HDR mode), 20 e/s dark current at 60 deg.C.

Albert, 8-2-2017.

ISSCC 2017 (1)

February 7th, 2017

Bongki Son of Samsung presented a paper “A 640 x 480 dynamic vision sensor with 9um pixel and 300MEPS address-event representation”.  This work reminds me very much of the research of Tobi Delbruck and of the projects of Chronocam.  A sensor is developed that does not generate standard images, but only indicates in which pixels there is a change from frame to frame.  The pixel used in this application is pretty complex, with more than 10 transistors and at least two caps per pixel.  The results shown at the end of the presentation gave quite an impressive picture of what can be achieved by such a device.
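For context, the address-event principle such a dynamic vision sensor is built on can be sketched as follows; the contrast threshold below is an arbitrary illustrative value, not a number from the Samsung paper:

```python
import math

def dvs_events(intensities, threshold=0.15):
    """Return (index, polarity) events for a sequence of pixel intensities.

    A pixel emits an ON/OFF event when its log-intensity has changed by
    more than the contrast threshold since its last event.
    """
    events = []
    ref = math.log(intensities[0])
    for i, val in enumerate(intensities[1:], start=1):
        logv = math.log(val)
        if abs(logv - ref) > threshold:
            events.append((i, "ON" if logv > ref else "OFF"))
            ref = logv            # reset the reference at each event
    return events

# small brightness wiggles are ignored; real changes produce events
print(dvs_events([100, 101, 130, 129, 90]))   # [(2, 'ON'), (4, 'OFF')]
```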

InSilixa presented a paper “A fully integrated CMOS fluorescence biochip for multiplex polymerase chain reaction (PCR) processes”.  This disposable CMOS biochip allows DNA analysis with a flow-through fluidic system, and includes a 32 x 32 array of DNA biosensors.  Next to the photosensitive part, quite some circuitry is included in every pixel as well.  Even a heater (fabricated in metal 4) is part of every pixel.  Another critical feature of the design is the on-chip interference filter, which needs to block the excitation light (around 500 nm) but pass the low-level fluorescence light that needs to be detected (around 590 nm).

Min-Woong Seo of Shizuoka University presented “A programmable sub-nanosecond time-gated 4-tap lock-in pixel CMOS image sensor for real time fluorescence lifetime imaging microscopy”.  Also in this case the pixel is pretty large and contains a lot of extra electronics next to the light sensitive area.  The modulation pixel has 4 taps, which are addressed every 0.9 ns (= very fast !).  The pixel looks very much like a CMOS 4T pixel with a charge storage node for global shuttering, but in this case the pixel has 4 charge nodes to store information.  It is not the first time that Shizuoka University publishes pixels for ToF applications, and I am always very much intrigued by their device simulations (they use the same tools as Delft University of Technology).  It is indeed amazing to see how narrow-channel effects are used in this pixel to speed up the device.
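To illustrate how time-gated charge samples relate to a fluorescence lifetime, the classic two-gate “rapid lifetime determination” estimate can be sketched as follows; this is a generic textbook method, not necessarily the processing used with this 4-tap pixel:

```python
import math

def lifetime_from_two_gates(q1, q2, gate_ns):
    """Estimate the lifetime tau (ns) from the charges q1 and q2 collected
    in two equal, consecutive time gates: tau = gate / ln(q1/q2)."""
    return gate_ns / math.log(q1 / q2)

# For a pure exponential decay with tau = 2 ns sampled with 1 ns gates,
# q1/q2 = exp(gate/tau), so the estimator recovers tau exactly:
q1 = 1000.0
q2 = 1000.0 * math.exp(-1.0 / 2.0)
print(lifetime_from_two_gates(q1, q2, gate_ns=1.0))   # 2.0
```

With 4 taps instead of 2, more of the decay curve is captured in a single exposure, which improves robustness against noise and background.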

Albert, 7-2-2017.

Good Bye 2016 ! 

December 23rd, 2016

Again another year (almost) has passed.  I know it sounds a bit silly, but time is flying by, and I do have the impression that everything is moving faster than ever before.

2016 started with a great special issue on solid-state imaging of the IEEE Transactions on Electron Devices in January.  I had the honour of being the guest-editor-at-large for this special issue.  (What does the title of guest-editor-at-large mean ?  A lot of work !)  But I am a big fan of IEEE-ED and IEEE-JSSC, because these journals are great sources of information from and for our community.  So I was really pleased with the invitation of the IEEE to serve as the guest-editor-at-large, and I am happy that I could cooperate with my soul-mates in imaging.

In 2015 Harvest Imaging brought a new product to the market : a reverse engineering report of a particular imaging feature present in a commercial camera.  The first reverse engineering report was devoted to Phase Detection Auto-Focus Pixels.  In the meantime, in 2016, I started a new project.  Because this new project is still in the preparation phase, it is difficult to disclose the topic, but it will be based on tons and tons of measurements.  Recently I bought EMVA1288 test equipment and I hope to get started with it sometime after New Year.

The Harvest Imaging Forum 2016 targeted “Robustness of CMOS Technology and Circuitry”.  I do have to admit that the interest in the 2016 Forum was less than in the 2015 Forum.  Something I do not immediately understand, because the robustness of CMOS is a topic that should be of interest to our imaging community as well.  The main objective of the Harvest Imaging Forum is to address topics that are somewhat outside my own core expertise, but that are still important subjects for solid-state imaging.  (For subjects that belong to my own expertise, I do not have to hire external instructors, of course.)  Nevertheless, Harvest Imaging will continue with the Forum in 2017.  I do have a topic and a speaker in mind, but the speaker himself does not know yet.  More info will follow in the Spring of 2017, I guess.

Although (or maybe just because ?) we did not have a new IISW in 2016 (the next one will be in 2017), 2 new conferences were launched in Europe : AutoSens and MediSens.  I attended both, also because both of them are organized by a good friend of mine, Robert Stead, and his crew.  I was happy to see new applications being introduced by young engineers working in the solid-state imaging field.  I am pretty sure that the next generation will be capable of continuing to grow the solid-state imaging business.  Imaging was never as big and appealing as it is today, and I am pretty sure that in the future imaging can and will only become bigger.

Welcome 2017 !  Looking forward to another great imaging year, with the IISW in Japan !

Wishing all my readers a Merry Christmas and a Happy New Year.  “See” you soon.

Albert, 23-12-2016.


Signal-to-Noise Ratio (SNR)

December 16th, 2016

The Signal-to-Noise Ratio quantifies the performance of a sensor in response to a particular exposure.  It quantifies the ratio of the sensor’s output signal versus the noise present in the output signal, and can be expressed as :

SNR = 20·log(Sout/σout)

With :

  • SNR : signal-to-noise ratio [dB],
  • Sout : output signal of the sensor [DN, V, e],
  • σout : noise present in the output signal [DN, V, e].

Notice that :

  • the output signal and the noise level need to be expressed in the same way : in digital numbers (DN), in Volts (V) or in number of electrons (e),
  • the specification of the SNR only makes sense if also the input signal is clearly specified. Without input signal, there is no output signal,
  • the noise is the total temporal noise of all parts, in the pixel itself as well as in the readout chain of the pixel. For some applications the photon shot noise is included in σout as well, for others it is not (see further).

A few important remarks w.r.t. signal-to-noise ratio :

  • the signal-to-noise ratio specified for an imager is a single number that should be valid for all pixels. Because the pixels are analog in nature, they all differ (a little bit) from each other.  About 50 % of the pixels will have a lower signal-to-noise ratio than the specified value and about 50 % will have a higher signal-to-noise ratio than the specified value,
  • that single number does not have any information about the dominant noise source, neither about the column noise, row noise and/or pixel noise,
  • the fixed-pattern noise is not included in the definition of SNR. The argumentation very often heard is that fixed-pattern noise can be easily corrected, but any correction or cancellation of fixed-pattern noise may increase the level of the temporal noise and will reduce the signal-to-noise ratio,
  • in the case the sensor is used for video applications, very often the photon shot noise is omitted from the total noise σout, and then the SNR listed in the data sheet is much higher than what reality will bring. If the sensor is used for still applications, mostly the photon shot noise is included in the total noise σout,
  • in a photon-shot-noise limited operation of the sensor, the noise σout is by definition equal to the photon shot noise, and the maximum SNR that can be delivered by the sensor will be :

SNRmax = 20·log(Ssat/√Ssat) = 20·log(√Ssat) = 10·log(Ssat)

With :

  • SNRmax : maximum signal-to-noise ratio [dB],
  • Ssat : saturation output signal of the sensor [e].

One last remark : the various noise sources present in a sensor (strongly) depend on temperature, and so does the SNR.  There is not a single noise source that becomes better (= lower noise) at higher temperatures, but in most data sheets the SNR is specified at room temperature.  Be aware that sensors that are not cooled or temperature stabilized will run at a temperature higher than room temperature, due to the self-heating of the sensor in the camera.  This effect will automatically reduce the SNR below the numbers specified in the data sheet.
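A small numerical illustration of the formulas and remarks above, with a hypothetical 1000 e signal and 5 e read noise (values chosen for illustration only); note how omitting the photon shot noise inflates the quoted SNR:

```python
import math

def snr_db(signal_e, noise_e):
    """Signal-to-noise ratio in dB, signal and noise both in electrons."""
    return 20 * math.log10(signal_e / noise_e)

signal = 1000.0                       # e-, hypothetical mid-level exposure
read_noise = 5.0                      # e-, hypothetical readout-chain noise
shot_noise = math.sqrt(signal)        # photon shot noise = sqrt(signal)
total_noise = math.sqrt(read_noise**2 + shot_noise**2)

print(f"shot noise omitted  : {snr_db(signal, read_noise):.1f} dB")   # 46.0 dB
print(f"shot noise included : {snr_db(signal, total_noise):.1f} dB")  # ~29.9 dB
print(f"shot-noise limit    : {10 * math.log10(signal):.1f} dB")      # 30.0 dB
```

The shot-noise-limited value 10·log(S) is the upper bound which the shot-noise-included SNR approaches when the read noise becomes negligible.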


In conclusion : the SNR value specified in data sheets can never be reached by all pixels in real imaging situations, because it is an average number, the fixed-pattern noise is not taken into account, and the self-heating of the sensor lowers the SNR; moreover, in many video applications the photon shot noise is omitted.


Albert, 16-12-2016.


Dynamic Range (DR)

December 2nd, 2016

The Dynamic Range (DR) of an imager gives an indication of the imager’s ability to resolve details in dark areas as well as details in light areas of the same image.  It indicates the ratio between the largest and the smallest signal that can be detected.

Mathematically it is defined as :

DR = 20·log(Ssat/σread)

with :

  • DR : dynamic range [dB],
  • Ssat : saturation signal of the sensor [DN or V or e],
  • σread : noise in dark [DN or V or e].

Notice that :

  • the saturation level and the noise in dark need to be expressed in the same way : in digital numbers (DN), in Volts (V) or in number of electrons (e),
  • the noise in dark is the total temporal noise contribution of all electronic parts that are included in the readout chain, starting in the pixel.
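Applying the definition above to an assumed sensor with a 20,000 e saturation level and 2 e noise in dark (illustrative numbers, not taken from any data sheet):

```python
import math

def dr_db(sat_e, read_noise_e):
    """Dynamic range in dB from the saturation level and the noise in dark,
    both expressed in electrons."""
    return 20 * math.log10(sat_e / read_noise_e)

print(f"{dr_db(20000, 2):.1f} dB")   # 80.0 dB
```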

A few important remarks w.r.t. dynamic range :

  • the dynamic range specified for an imager mostly is a single number that should be valid for all pixels. Because the pixels are analog in nature, they all differ (a little bit) from each other, and in principle about 50 % of the pixels will have a lower dynamic range than the specified value and about 50 % of the pixels will have a higher dynamic range than the specified value,
  • the noise in dark does not contain any noise related to the exposure of the imager, for instance dark-current shot noise. So in reality the noise present in the output signal will always be higher than the value used in the calculation of the dynamic range, because of the presence of dark-current shot noise.  Moreover, dark-current related noise sources are strongly dependent on the integration time,
  • in normal operation of an imager, the so-called junction temperature of the sensor is always higher than the room temperature at which the dynamic range is specified. Temperature has a serious impact on the noise level and consequently on the dynamic range,
  • fixed-pattern noise is not included in the definition of DR. The argumentation is that fixed-pattern noise can be easily cancelled, but any correction or cancellation of fixed-pattern noise may increase the level of the temporal noise and will reduce the dynamic range again.

In conclusion : the DR value found in data sheets is a number that only has a theoretical value; it can never be reached by all pixels in real imaging situations, because it is an average number over all pixels.  Very often it cannot even be reached by any of the pixels, because it does not take into account any exposure, nor any temperature effects or fixed-pattern noise issues.

Albert, 02-12-2016.

Harvest Imaging Forum 2016 : Update

October 25th, 2016

The Harvest Imaging Forum 2016 is almost sold out : there are only 4 seats left in the session of December 8th and 9th, 2016.  In contrast to the Forum 2015, there will be no extra session organized in coming January !  First come, first served !

Albert, 25-10-2016.

Training Last Week in Dresden (Germany)

October 17th, 2016

Last week I taught the 5-day class “Digital Imaging” organized by CEI.  This time the training took place in Dresden (Germany).  For the very first time we added a company visit to the training (after teaching hours).  We had the luxury of being invited by Aspect Systems to visit their premises in Dresden.  After the course on day 3, taxis were organized by CEI to bring the participants to Aspect Systems.  We were welcomed by Marcus Verhoeven, one of the company’s founders.  Marcus explained and showed us the activities of Aspect Systems.  In the imaging community, Aspect Systems is known for their test services, but actually they do much more than that.  Aspect Systems develops hardware, software, algorithms, optics and mechanics for imaging applications, independent of the final application of the systems (this can be testing, evaluation, or other purposes).

We limited the visit to 1 hour, so as not to overload the course participants with too much technical information after already 3 days of training.  But afterwards that limitation seemed to be a mistake.  Everybody was so enthusiastic about the visit and the contact with the real imaging world, that the only complaint we got was about the length of the visit : too short.

With this blog post, I want to thank Marcus Verhoeven and his co-workers for their time and hospitality in having us at their premises.  Hopefully we can repeat this company-visit experiment when we are again in Dresden for another training.  But then, for sure, we will spend some more time at Aspect Systems.

Thanks Marcus and team, success with your business !

Albert, 17-10-2016.

Harvest Imaging Forum 2016

October 13th, 2016

For those of you who are still interested in the Harvest Imaging Forum 2016 : there are only 6 seats left in the session of December 8th and 9th, 2016, before the forum is sold out.  In contrast to the Forum 2015, there will be no extra session organized in coming January !  First come, first served !

Albert, 13-10-2016.

AutoSens 2016 in Brussels

September 22nd, 2016

Yesterday morning (Sep. 21st, 2016) I attended a few sessions of AutoSens 2016 in Brussels.  It is a new conference organized by the people who started and grew Image Sensors Auto at the time they were still working for Smithers.  AutoSens was very well attended and was very well organized in a great setting, namely in the Auto-Museum in Brussels.  Excellent choice !

In the morning sessions, there were 3 papers related to image sensors.  Pierre Cambou (YOLE Development) talked about new developments in mobile and their spin-off to automotive applications, Daniel van Nieuwenhove (Softkinetic) gave a nice overview of 3D imaging technologies and how Softkinetic’s solutions fit into this landscape, and Tarek Lule’s (ST) presentation was about an HDR flicker-free CMOS image sensor.  The latter had the most technical information, although Tarek did not give any details about the pixel architecture.  But what I understood from his talk is the following : the pixels make use of multiple photodiodes :

  • a large photodiode captures information in a continuous mode during the exposure time; with its high sensitivity it basically “looks” after the details in the darkest parts of the image,
  • a small photodiode captures information in a chopped mode : during the exposure time the photodiode is active during short periods of time and inactive during the remaining time of the chopping period, and it sums the signals obtained during these short active periods.  In this way information is collected AT THE SAME TIME as with the large photodiode, but because of the smaller size and the chopping, this diode “looks” after the details in the mid-range of the image, and motion artefacts can be avoided,
  • a second small photodiode also captures information in a chopped mode, but is active during VERY short periods of time and inactive during most of the chopping period; it sums the signals obtained during these very short active periods.  In this way it “looks” after the details in the high-range of the image, again while avoiding motion artefacts.  The chopping frequency for the two smaller diodes is the same; the only difference is the duty cycle between active and non-active.

Apparently the pixel needs three photodiodes, but because of the chopping, the work of the two smaller photodiodes can be done by a single one, in combination with appropriate time-multiplexing between the short and very short active times within the chopping period.  So the pixel is based on two photodiodes in combination with a few storage nodes.  More information was not revealed …
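The effect of the chopping on the effective sensitivity can be sketched as follows; the photodiode integrates only during the active fraction of each chopping period, so its summed signal scales with the duty cycle while its samples still span the whole exposure time.  All numbers below are illustrative guesses, not ST’s actual parameters:

```python
def chopped_signal(flux_e_per_ms, exposure_ms, n_periods, duty_cycle):
    """Summed signal (e-) over n chopping periods with the given duty cycle."""
    period_ms = exposure_ms / n_periods
    active_ms = duty_cycle * period_ms     # active window within each period
    return n_periods * flux_e_per_ms * active_ms

flux = 500.0   # e-/ms on the small diode for a given scene (assumed)
# short active windows (10 % duty cycle) -> mid-range sensitivity
print(chopped_signal(flux, exposure_ms=10, n_periods=20, duty_cycle=0.10))  # 500.0
# very short active windows (1 % duty cycle) -> high-range sensitivity
print(chopped_signal(flux, exposure_ms=10, n_periods=20, duty_cycle=0.01))  # 50.0
```

Note the summed signal simply equals flux × exposure × duty cycle; the point of the many short windows is that they sample the whole exposure, which is what suppresses LED flicker and motion artefacts.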

Conclusion : a clever idea to make a flicker-free imager with a high dynamic range (quoted 145 dB).  Not many performance numbers were given, but the overall working of the device was shown by means of a video.  Looking forward to learning more about this device !


Albert, 22-09-2016.