
International Image Sensor Workshop (1)

Tuesday, May 30th, 2017

Some thoughts about day 1 :

  • Sony showed that they are ready for hybrid bonding at pixel level, with a pitch of 2 um (on test structures) and 4 um in a real imager with 1 um pixel pitch,
  • A collaboration between TSMC and Qualcomm illustrated a stacked image sensor on top of an FPGA,
  • According to Omnivision, the pixel race is picking up again.  This was illustrated by an imager with a pixel pitch of 0.9 um, with the same performance as the 1.0 um pixel,
  • Fermi Lab showed a very complex die-to-wafer-to-wafer structure,
  • TSMC realized a 4T pixel in which the charge transfer (underneath the transfer gate) no longer takes place at the interface but deeper into the silicon.  This too was demonstrated in a device with 0.9 um pixel pitch, with an improved noise performance,
  • TechInsights gave a great (historical) overview of PDAF pixels and stacking.  Although they describe what others are doing (or have done), a lot of interesting details were still shown,
  • BAE illustrated that dark current has been reduced over the years by a factor of 5000, and that we now have a temperature behaviour according to the Eg-law, while in the past it was the Eg/2-behaviour.  Unfortunately (or maybe fortunately), not all dark current secrets are revealed yet,
  • TowerJazz illustrated a pinned-storage node in a global shutter pixel with 2.8 um pixel pitch (is this global shutter CIS with 2.8 um seen elsewhere in a product of … ?)
  • A fluorine implant used to lower the noise in a CIS was presented by Dongbu,
  • Random Telegraph Noise got quite a bit of attention : talks from TSMC and two from Tohoku University showed a lot of measurement results to further explain and understand the RTN effect,
  • An on-chip near-IR filter for colour imaging was presented by VisEra.  This is an attractive alternative to the classical near-IR filter because it lowers the height of the camera module.

It is impossible to write about every single paper.  On day 1 there were 17 presentations plus 45 posters, an incredible amount of details and information.  But the good news is that all papers will become available on-line (open access) in about 2 or 3 months from now.


Albert, 30-05-2017.

Announcement of the fifth Harvest Imaging Forum in December 2017

Tuesday, May 16th, 2017

Mark your agenda now for the fifth Harvest Imaging Forum, scheduled for December 2017.

After the successful gatherings in 2013, 2014, 2015 and 2016, I am happy to announce the next one.  This fifth Harvest Imaging Forum will again be a high-level, technical, short course focusing on one particular hot topic in the field of solid-state imaging.  The audience will be strictly limited, to stimulate the interaction between the participants and the speaker(s) as much as possible.

The subject of the fifth forum will be :

“Low-Noise Analog CMOS Circuit Design : from devices to circuits”.

More information about the speaker and the agenda of the forum will follow in the coming days/weeks, but I wanted to share this announcement with you as early as possible to make sure you can keep your agenda free on these days (Dec. 7-8 or Dec. 11-12, 2017).


Albert, May 16th, 2017.


Tuesday, May 2nd, 2017

The webpage for the new Harvest Imaging project, related to reproducibility, variability and reliability of CMOS image sensors, is ready !

In this Harvest Imaging project the reproducibility, the variability and the reliability of CMOS imagers will be analyzed :

  • Reproducibility : will give quantitative information about how well particular measurements and retrieved performance data reproduce when the devices are measured over and over again with the same calibrated measurement equipment,
  • Variability : will give quantitative information about the spread of the performance data from sensor to sensor/from camera to camera,
  • Reliability : will give quantitative information about the stability of the sensor and camera performance over time.

The measurements are done on a higher-end, more expensive camera with a global shutter CMOS sensor and on a lower-end, cheaper camera with a rolling shutter CMOS sensor. The cameras will be thoroughly measured every 6 months over a period of 5 years.  The yearly reports about the measurement results will become available in the Summer of each calendar year (Summer ’17, Summer ’18, Summer ’19, Summer ’20 and Summer ’21). A customer can step into this project at any given time, but the earlier the more attractive the pricing of the report(s) will be.  Once a customer has stepped into the project, he/she will automatically receive all reports that are produced AFTER the date he/she stepped in.
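To make these three definitions a bit more concrete, here is a minimal sketch of how such metrics could be computed from repeated measurements.  This is my own illustration, not the project’s actual analysis code, and all numbers are invented :

```python
import statistics

def reproducibility(repeats):
    """Spread of repeated measurements of ONE camera, taken with the
    same calibrated equipment, as a coefficient of variation in %."""
    return 100.0 * statistics.stdev(repeats) / statistics.mean(repeats)

def variability(per_camera_values):
    """Camera-to-camera spread of one performance parameter, in %."""
    return 100.0 * statistics.stdev(per_camera_values) / statistics.mean(per_camera_values)

def reliability_drift(first, last):
    """Relative change of a parameter between the first and the last
    measurement round (every 6 months, over 5 years), in %."""
    return 100.0 * (last - first) / first

# Invented dark-signal numbers (in DN), purely for illustration:
print(round(reproducibility([10.1, 10.0, 10.2, 9.9]), 2))   # same camera, 4 runs
print(round(variability([10.0, 11.5, 9.7, 10.8]), 2))       # 4 different cameras
print(round(reliability_drift(10.0, 10.6), 2))              # first vs. last round
```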

For more information, please check out :

Albert, 2/5/2017.


Thursday, April 13th, 2017

After the successful Harvest Imaging project on Phase-Detection Auto-Focus pixels (or PDAF), a new project has been started that will again generate technical data about CMOS image sensors.  This time the focus will be on the reproducibility, variability and reliability of the sensor’s performance characteristics.  This kind of information has never been published before.  None of the CMOS image sensor vendors supplies numerical information about the reproducibility, variability and reliability of their devices.  So if the vendors do not supply this data, only measurements on existing products can reveal the “secrets”.

Within a couple of weeks from now, more information on the tests performed as well as on the parameters characterized will become available on this website of Harvest Imaging (

So stay tuned !!!

Albert, 13-04-2017.

RTS is not always noise related !!

Sunday, March 5th, 2017

Recently we went on a ski holiday.  Because I do not have my own skis, I had to rent them, and below you can see what I got …… : RTS !!!!!



Albert 05-03-2017.

ISSCC 2017 (4)

Friday, February 10th, 2017

“A 0.44 e rms read-noise 32fps 0.5 Mpixel high-sensitivity RG-less-pixel CMOS image sensor using bootstrapping reset” from Shizuoka University was presented by T. Wang.  The device uses correlated multiple sampling (CMS) at column level in combination with a high conversion gain.  The latter is obtained by a reset-gate-less pixel and a bootstrapping technique.  The final result is a conversion gain of over 150 uV/electron.  The reset-gate-less pixel is not really new; it has already been published by the same group at other conferences.  By carefully designing the distance between the floating diffusion and the reset drain diode, the reset-gate-less device can be operated.  But in this paper the extra bootstrapping technique is added to allow a larger voltage swing of the pixel.  Pictures of a scene illuminated at 0.1 lux were shown (after averaging 16 images !).  Pixel size is 11.2 um, with a full well of 4100 electrons.  The read noise is as low as 0.44 electrons rms.  Despite the low full well, a dynamic range of 72.3 dB is still mentioned.
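For readers less familiar with CMS : averaging M samples of both the reset level and the signal level suppresses the uncorrelated (thermal) part of the read noise by a factor of sqrt(M).  A minimal sketch; the single-sample starting value below is my own invented number, not taken from the paper :

```python
import math

def cms_read_noise(single_sample_noise_e, m):
    """Correlated multiple sampling: averaging m uncorrelated samples
    reduces the thermal read noise by sqrt(m).  RTN and 1/f noise do
    not follow this simple law exactly."""
    return single_sample_noise_e / math.sqrt(m)

# With an (invented) 1.76 e- single-sample noise and 16 samples:
print(cms_read_noise(1.76, 16))  # 0.44 e- rms
```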

The last paper in the imaging session was entitled “A 1ms high-speed vision chip with 3D stacked 140GOPS column-parallel PEs for spatio-temporal image processing” by T. Yamazaki of Sony.  The device fully exploits the capabilities of 3D stacking.  In the second layer of silicon, a memory is included next to the column-level processing elements and the column-level ADC.  In this bottom silicon layer, filtering of the data can be done, as well as target detection, target tracking and feature extraction.  The speed at which all operations are done is simply phenomenal.  The imaging part is made in a 90 nm 1P4M process, the bottom part in a 40 nm 1P7M process.  Pixel size is 3.5 um, full well is 19,800 electrons, random noise is 2.1 electrons, resulting in 80 dB dynamic range at 12 bits.  As mentioned in the title, the processing in the spatio-temporal domain can be done at a speed of 1 ms.
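As a quick sanity check (my own back-of-the-envelope calculation, not taken from the paper), the quoted dynamic range follows directly from the full well and the random noise :

```python
import math

def dynamic_range_db(full_well_e, noise_e_rms):
    """Linear dynamic range of a pixel: full well over the temporal
    noise floor, expressed in dB."""
    return 20.0 * math.log10(full_well_e / noise_e_rms)

print(round(dynamic_range_db(19800, 2.1), 1))  # 79.5 dB, i.e. the reported 80 dB
```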

Albert, 10-2-2017.

ISSCC 2017 (3)

Thursday, February 9th, 2017

Tsutomu Haruta of Sony presented “A 1/2.3 inch 20 Mpixel 3-layer stacked CMOS image sensor with DRAM”.  In just a few words : the sensor is composed of 3 layers : the top layer contains the photon conversion part (BSI), the middle layer contains a DRAM and the bottom layer contains the processing part.  This is the first time that a stacked imager with 3 layers has been shown.  The mutual connections between the various levels of silicon are realized by TSVs.  The image part can be read out very fast, much faster than the interface with the external world can handle.  So the DRAM is used as an intermediate frame buffer : fast readout of the imaging part with the data stored in the DRAM, next a slow readout of the DRAM to accommodate the slow interface of the total system.  The pixels are arranged in a 2 x 4 shared pixel concept, with 8 column readout lines for two groups of 2 x 4 pixels.  4 rows of column-level ADCs are included to allow the fast readout of the focal plane.  Remarkable is the fact that the data generated in the top layer has to be transported in the analog domain to the lowest level, where the ADC is located.  Next the digital data is stored in the middle layer, being the DRAM.  It was not mentioned during the presentation, nor during the Q&A, why the DRAM is located between the top and bottom layers.

With this particular architecture of the system, one can read out the sensor part extremely fast into the DRAM and one can read out the DRAM relatively slowly towards the outside world.  In this way artefacts of the rolling shutter are limited.  Once the data is available in the DRAM, it is also possible to work in different formats, even in parallel with each other : full resolution, or limited resolution as a kind of digital zoom.  Another very nice feature of the sensor is its binning capability : by combining binning on the floating diffusion with binning in the voltage domain, the resolution of the imager can be drastically reduced.  If this reduced-resolution image is then sampled at a high speed, stored in DRAM and retrieved at a lower speed, an “on-chip” slow-motion is created.  In the binned lower-resolution mode, it is possible to store 63 frames in the DRAM, captured at a speed of 960 fps.  Demonstrations of this feature during and after the presentation were shown.  Great images !
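The slow-motion arithmetic is easy to sketch.  Note that the playback rate of 30 fps below is my own assumption; the presentation only specified the 960 fps capture rate and the 63-frame buffer :

```python
def slow_motion(frames_in_dram, capture_fps, playback_fps):
    """Return how much real time the DRAM buffer covers, how long it
    lasts on playback, and the resulting slow-motion factor."""
    captured_s = frames_in_dram / capture_fps
    playback_s = frames_in_dram / playback_fps
    return captured_s, playback_s, capture_fps / playback_fps

captured, played, factor = slow_motion(63, 960, 30)
print(f"{captured * 1000:.1f} ms of action, {played:.1f} s on screen, {factor:.0f}x slower")
```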

Some numbers : in total 17 layers of interconnect are used in the 3-layer stacked imager : 6M for the CIS (90 nm), 4M for the DRAM (30 nm) and 7M for the logic (40 nm).  The imager has 21 Mpixels at a 1.22 um pixel pitch, the DRAM has 1 Gbit, and the interface is MIPI based.

Shin’ichi Machida of Panasonic presented a paper entitled : “A 2.1 Mpixel organic-film stacked RGB-IR image sensor with electrically controllable IR sensitivity”.  Panasonic already presented a couple of papers with organic films at last year’s ISSCC.  But in this new presentation, 2 organic films are stacked on top of each other : the top one is sensitive to IR light, the bottom one is sensitive to RGB.  Both layers need a particular voltage across them to become light sensitive, and this light sensitivity shows a particular step behaviour.  Below a kind of threshold voltage the organic film is not light sensitive, and this threshold voltage differs between the RGB-film (low threshold) and the IR-film (high threshold).  So if a large voltage is applied across the sandwich of the two organic films, both become light sensitive; if a lower voltage is applied across the sandwich, only the RGB-film becomes light sensitive.  In this way the light sensitivity of the IR-film can be switched on and off while the RGB-film is still active.  (Although the sensitivity of the RGB-film drops to about 50 % if the IR-film is switched off.)  Overall an interesting feature that other imagers with classical pixels cannot show.  Unfortunately (just like last year) no information was given about noise, nor about dark performance; otherwise a good presentation.
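The voltage-controlled sensitivity can be modelled as a simple step function per film.  A sketch; the threshold values below are invented, as the presentation only stated that the RGB threshold is lower than the IR threshold :

```python
def film_sensitivity(v_applied, v_th_rgb=2.0, v_th_ir=4.0):
    """Step-function model of the two stacked organic films: each film
    becomes light sensitive only above its own threshold voltage.
    The threshold values are hypothetical."""
    rgb_on = v_applied > v_th_rgb
    ir_on = v_applied > v_th_ir
    return rgb_on, ir_on

print(film_sensitivity(3.0))  # (True, False): RGB only
print(film_sensitivity(5.0))  # (True, True): RGB + IR
```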

Albert, 9-2-2017.

ISSCC 2017 (2)

Wednesday, February 8th, 2017

Wootaek Lim of the University of Michigan talked about “A sub-nW 80mlx-to-1.26Mlx self-referencing light-to-digital converter with AlGaAs photodiode”.  The work focuses on a wearable image sensor, for instance to acquire a measurement of the cumulative light exposure a person gets over a long period of time (e.g. UV radiation exposure).  Crucial parameters for this application are low power consumption, wide dynamic range and low relative error.  These requirements are realized by using a special ring oscillator and counter as an integrating ADC, using the photodiode voltage as the input in combination with a divider to extend the measurable voltage range, and linearly coding the light intensity in the log-log domain.  All the various techniques were explained in detail, including circuit diagrams.  As a result, with these new techniques, the power was reduced by over 1000 x and the dynamic range was extended up to 1.26 Mlx (starting from 80 mlx), all combined with the lowest conversion energy of 4.13 nJ/conv. at 50 klx.  The sensor is fully functional between -20 and +85 deg.C.
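The quoted illuminance range corresponds to a huge dynamic range.  A quick back-of-the-envelope check of my own, expressed as 20·log10 of the illuminance ratio, the way sensor dynamic range is usually quoted :

```python
import math

lo_lux, hi_lux = 80e-3, 1.26e6   # 80 mlx to 1.26 Mlx, from the paper title
ratio = hi_lux / lo_lux
print(f"ratio = {ratio:.3g}, i.e. {20 * math.log10(ratio):.1f} dB")
```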


“A 1.8 e temporal noise over 110dB dynamic range 3.4 um pixel pitch global shutter CMOS image sensor with dual-gain amplifiers, SS-ADC and multiple accumulation shutter” by Masahiro Kobayashi of Canon.  This was a great paper with a great presentation of the obtained results, but I did have serious doubts about the novelty of the work (and I was not the only one).  What is done is the implementation of a global shutter with a storage node in the charge domain.  This results in the so-called 6T pixel architecture.  To increase the fill factor of the pixels, 2-by-1 sharing is applied.  In a classical GS pixel, the charge needs to be stored on the PPD, on the SG and on the FD.  If they are all equal to each other in capacitive value, a particular full well is obtained which is pretty limited.  The idea now is to make the PPD smaller and the SG larger.  In that case the full well would be determined by the small PPD, but during the exposure the PPD can be emptied multiple times, and then the weakest link in the chain is shifted to the larger SG.  This is not new : Canon themselves introduced this already at IEDM 2016, but Aptina also published a similar solution at the IISW in 2009.  Nevertheless, besides this general idea, the presented sensor has a funnel-shaped light-guide structure above the pixels and an optimized light shield to keep the PLS low.  To enhance the dynamic range of the sensor, the columns are provided with a gain stage that automatically chooses between a gain of 1x or 4x.  With some clever timing of the transfer out of the PPD and with an increased readout speed of the sensor, extra new options can be added, such as a wider dynamic range and in-pixel coded exposure.
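The benefit of the multiple accumulation shutter is easy to sketch numerically.  The full-well numbers below are invented for illustration only and do not come from the paper :

```python
def effective_full_well(ppd_fw_e, sg_fw_e, n_transfers):
    """Multiple accumulation shutter: during one exposure the small PPD
    is emptied n_transfers times into the larger storage gate (SG), so
    the PPD is no longer the weakest link in the chain."""
    return min(n_transfers * ppd_fw_e, sg_fw_e)

print(effective_full_well(2500, 10000, 1))  # classical GS: limited by the PPD
print(effective_full_well(2500, 10000, 4))  # 4 transfers: limited by the larger SG
```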

Results and images were shown during the presentation; although not everything is/was new, the results were impressive.  5 Mpixels, up to 120 fps, 450 mW, pixel pitch 3.4 um, 130 nm 1P4M +LS process, 1.8 e noise floor, maximum 79 dB dynamic range and in the HDR mode 111 dB, 20 e/s dark current at 60 deg.C.

Albert, 8-2-2017.

ISSCC 2017 (1)

Tuesday, February 7th, 2017

Bongki Son of Samsung presented a paper “A 640 x 480 dynamic vision sensor with 9um pixel and 300MEPS address-event representation”.  This work reminds me very much of the research of Tobi Delbruck and of the projects of Chronocam.  A sensor is developed that does not generate standard images but only indicates in which pixels there is a change from frame to frame.  The pixel used in this application is pretty complex, with more than 10 transistors and at least two caps per pixel.  The results shown at the end of the presentation gave quite an impressive view of what can be achieved by such a device.

InSilixa presented a paper “A fully integrated CMOS fluorescence biochip for multiplex polymerase chain reaction (PCR) processes”.  This disposable CMOS biochip allows DNA analysis with a flow-through fluidic system.  The chip includes 32 x 32 DNA biosensors.  Next to the photosensitive part in every pixel, quite some circuitry is included as well.  Even a heater (fabricated in metal 4) is part of every pixel.  Another critical feature of the design is the on-chip interference filter that needs to block the excitation light (around 500 nm), but needs to pass the low-light-level fluorescence light that needs to be detected (around 590 nm).

Min-Woong Seo of Shizuoka University presented “A programmable sub-nanosecond time-gated 4-tap lock-in pixel CMOS image sensor for real time fluorescence lifetime imaging microscopy”.  Also in this case the pixel is pretty large and contains a lot of extra electronics next to the light-sensitive area.  The modulation pixel has 4 taps, which are addressed every 0.9 ns (= very fast !).  The pixel looks very much like a CMOS 4T pixel with a charge storage node for global shuttering.  But in this case the pixel has 4 charge nodes to store information.  It is not the first time that Shizuoka University has published pixels for ToF applications, and I am always very much intrigued by their device simulations (they use the same tools as Delft University of Technology).  It is indeed amazing to see how narrow-channel effects are being used in this pixel to speed up the device.
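For readers unfamiliar with fluorescence lifetime imaging : with two equal time gates on a mono-exponential decay, the lifetime follows from a simple ratio of the collected charges, and a 4-tap pixel provides such samples in parallel.  A sketch with synthetic numbers of my own; only the 0.9 ns gate time comes from the paper :

```python
import math

def rld_lifetime(d0, d1, gate_ns):
    """Rapid lifetime determination on a mono-exponential decay:
    tau = t_gate / ln(d0 / d1), with d0 and d1 the charges collected
    in two consecutive, equally long gates."""
    return gate_ns / math.log(d0 / d1)

# Synthetic decay with tau = 2.0 ns, sampled with 0.9 ns gates:
tau_true = 2.0
d0 = math.exp(-0.0 / tau_true)
d1 = math.exp(-0.9 / tau_true)
print(round(rld_lifetime(d0, d1, 0.9), 2))  # recovers 2.0 ns
```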

Albert, 7-2-2017.

Good Bye 2016 ! 

Friday, December 23rd, 2016

Yet another year has (almost) passed.  I know it sounds a bit silly, but time is flying by, and I do have the impression that everything is moving faster than ever before.

2016 started with a great special issue on solid-state image sensors of IEEE Transactions on Electron Devices in January.  I had the honour of being the guest-editor-at-large for this special issue.  (What does the title of guest-editor-at-large mean ?  A lot of work !)  But I am a big fan of IEEE-ED and IEEE-JSSC, because these journals are great sources of information from and for our community.  So I was really pleased with the invitation of IEEE to serve as the guest-editor-at-large, and I am happy that I could cooperate with my soul-mates in imaging.

In 2015 Harvest Imaging brought a new product to the market : a reverse engineering report of a particular imaging feature present in a commercial camera.  The first reverse engineering report was devoted to Phase Detection Auto-Focus pixels.  In the meantime, in 2016, I started a new project.  Because the new project is still in the preparation phase, it is difficult to disclose the topic, but it will be based on tons and tons of measurements.  Recently I bought EMVA1288 test equipment and I hope to get started with it sometime after New Year.

The Harvest Imaging Forum 2016 targeted “Robustness of CMOS Technology and Circuitry”.  I do have to admit that the interest in the 2016 Forum was lower than in the 2015 Forum.  Something I do not immediately understand, because the robustness of CMOS is a topic that should be of interest to our imaging community as well.  The main objective of the Harvest Imaging Forum is to touch on topics that are somewhat outside my own core expertise, but still important subjects for solid-state imaging.  (For subjects that belong to my own expertise, I do not have to hire external instructors, of course.)  Nevertheless, Harvest Imaging will continue with the Forum in 2017 as well.  I do have a topic and a speaker in mind, but the speaker himself does not know yet.  More info will follow in the spring of 2017, I guess.

Although (or maybe just because ?) we did not have a new IISW in 2016 (the next one will be in 2017), 2 new conferences were launched in Europe : the AutoSens and the MediSens.  I attended both, also because both of them are organized by a good friend of mine, Robert Stead, and his crew.  I was happy to see new applications being introduced by young engineers working in the solid-state imaging field.  I am pretty sure that the next generation will be capable of continuing to grow the solid-state imaging business.  Imaging has never been as big and appealing as it is today, and I am pretty sure that in the future imaging can and will only become bigger.

Welcome 2017 !  Looking forward to another great imaging year, with the IISW in Japan !

Wishing all my readers a Merry Christmas and a Happy New Year.  “See” you soon.

Albert, 23-12-2016.