Archive for the ‘Uncategorized’ Category

ISSCC 2019 (1)

Tuesday, February 19th, 2019

SmartSens presented a paper entitled “A stacked global-shutter CMOS with SC-type hybrid-GS pixel and self-knee point calibration single-frame HDR and on-chip binarization algorithm for smart vision applications”.  This paper describes an image sensor in which several already known ideas are combined.  The pixel of the imager is more or less the same as the one used by CMOSIS (now ams) : a global-shutter pixel with the storage node in the voltage domain.  Actually two storage nodes, to sample the reset and signal values and so allow CDS.  Where the CMOSIS pixel has an in-pixel current source for the first source follower, the SmartSens pixel instead has a row-select switch, which allows the pixel to run in a rolling mode without the extra in-pixel sampling needed for the global-shutter mode.  So the pixel can run in rolling or in global-shutter mode (that is the word “hybrid” in the title of the paper).

HDR is obtained by biasing the TX gate at two levels during the exposure time.  In the first part of the exposure time TX gets an intermediate value which limits the full well of the PPD, and in the second part of the exposure time TX gets a low value to increase the full well of the PPD.  This too is a known technique, and it is also known that the creation of a knee point in the output characteristic creates serious fixed-pattern noise issues.  But in this paper, an on-chip calibration is done to cancel out the FPN.  And this is an interesting method : by an appropriate clocking of the reset drain, reset gate and transfer gate, the pinned photodiode is completely filled with charges to saturation, and next the pixel is read out to measure the saturation level.  All this is done on-chip.  The method of filling the PPD through the reset and transfer transistors is not new either (it was developed by TU Delft), but using this method for on-chip calibration is new.
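For readers who prefer to see the idea in symbols, here is a minimal sketch (my own illustration, not the paper’s circuit ; all function names and numbers are invented for the example) of the knee-point photoresponse and of a calibration that references a measured saturation level :

```python
# Sketch of a knee-point HDR response : during the first part of the
# exposure TX sits at an intermediate level, clipping the photodiode at a
# reduced full well ; afterwards TX goes low and the full well is restored.
# The knee level varies from pixel to pixel (FPN), which an on-chip
# calibration can measure by saturating the PPD and reading it out.

def knee_response(photocurrent, t1, t2, knee_level, full_well):
    """Collected charge (e-) for a pixel with a mid-exposure knee point."""
    q1 = min(photocurrent * t1, knee_level)   # phase 1 : limited full well
    q2 = photocurrent * t2                    # phase 2 : full well restored
    return min(q1 + q2, full_well)

def calibrate(raw, measured_knee, nominal_knee):
    """Remove per-pixel knee-level FPN using the measured saturation level."""
    return raw - (measured_knee - nominal_knee)
```

A bright pixel clips at the knee in phase 1 and keeps integrating in phase 2, which is exactly what bends the output characteristic and extends the dynamic range.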

The readout chain is based on column parallel 13-bit counting ADCs with digital CDS.

The stacking technology (45 nm/65 nm, TSMC) has several interesting advantages, such as the use of MIM caps for the on-chip sample-and-hold.  In this way the caps can be made larger (= lower kTC noise) and the presence of the caps has no influence on the fill factor.  Another advantage is the quantum efficiency, reported as 95 % in the green and 36 % at 940 nm.  Nothing is mentioned about MTF ; the high QE at 940 nm suggests that a thick epi layer is used, and that is not always beneficial for MTF.
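The “larger caps = lower kTC” remark follows directly from the kTC noise formula ; a quick sketch (illustrative values only, not numbers from the paper) :

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def ktc_noise_voltage(c_farad, temp_k=300.0):
    """RMS reset (kTC) noise voltage on a sampling capacitor : sqrt(kT/C)."""
    return math.sqrt(K_BOLTZMANN * temp_k / c_farad)

# Doubling the sample-and-hold capacitor lowers the noise voltage by sqrt(2) :
v_small = ktc_noise_voltage(1e-15)   # 1 fF cap, roughly 2 mV rms at 300 K
v_large = ktc_noise_voltage(2e-15)   # 2 fF cap
```

So moving the caps to the logic die buys larger C (lower noise) at zero cost in fill factor.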

Some more numbers : pixel size 2.7 um, 110 dB dynamic range (with HDR), PRNU is 0.6 %, full well is 10,000 electrons and the random noise is 3.5 electrons.  Shutter efficiency 20,000:1.
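As a quick cross-check of these numbers (my own back-of-the-envelope, not from the paper) : the full well and random noise alone give the *linear* dynamic range, and the 110 dB figure is only reached thanks to the HDR knee point :

```python
import math

def dynamic_range_db(full_well_e, noise_e):
    """Dynamic range in dB : 20*log10(full well / temporal noise floor)."""
    return 20.0 * math.log10(full_well_e / noise_e)

# 10,000 e- full well and 3.5 e- noise give roughly 69 dB in linear mode ;
# the reported 110 dB is the HDR (knee-point) mode.
dr_linear = dynamic_range_db(10_000, 3.5)
```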

University of Michigan presented “Energy-efficient low-noise CMOS image sensor with capacitor array-assisted charge-injection SAR ADC for motion-triggered low-power IoT applications”.  Quite some presentation time was spent on the working principle of the ADC ; apparently the ADC concept was already presented at ISSCC 2016.  In a few words : the large capacitor array needed in a classical SAR is replaced by a current/charge injector which is controlled by a digital switch.
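For those not familiar with SAR converters : the bit-trial loop below is the textbook successive-approximation search (a generic sketch, not the presented circuit ; in the presented chip the capacitive DAC is replaced by the digitally switched charge injector, but the search itself is the same) :

```python
def sar_adc(vin, vref, n_bits=10):
    """Classic successive-approximation search : one comparison per bit,
    testing bits from MSB to LSB against a DAC level."""
    code = 0
    for bit in reversed(range(n_bits)):
        trial = code | (1 << bit)
        # DAC output corresponding to this trial code
        if vin >= vref * trial / (1 << n_bits):
            code = trial          # comparator says "keep the bit"
    return code
```

Whatever generates the DAC level (capacitor array or charge injection), only n_bits comparisons are needed per conversion, which is what makes the SAR so energy-efficient.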

Motion detection in a sensor can be done in the pixel (requires a pixel modification, with reduced image quality), outside the array (requires extra memory) or in the column (called “near pixel”).  The latter concept is used in this presentation.  Only minor extra hardware is added in the column to give the imager its motion-detection capability.

Pixel size is 1.5 um, 792 x 528 pixels, 65 nm 1P3M technology of TPSCo.  Interesting to see was the power breakdown : energy/frame/pix = 63.6 pJ (ADC + pixel).

Albert, 19-02-19

Goodbye 2018 !

Friday, December 21st, 2018

Again another year has (almost) passed by.  And then it is a good moment to take a look back at what happened in 2018.  And yes, again a busy year and again a very interesting year if one looks at the technical innovations and achievements !

2018 was the first full year in which every month the Harvest Imaging Newsletter was issued.  I got a lot of positive reactions about it, although the Newsletter is very simple and short.  Maybe these are the key success factors for such a monthly information letter.  Many people asked to be added to the mailing list, indicating that this source of information is appreciated.

In 2018 also the project about reproducibility, variability and reliability of CMOS imagers was continued.  Actually that should not be any news, because the promise was made at the beginning of the project that it would last for 5 years.  In the meantime about 2500 cameras are being characterized (not all different ones !) and some of the obtained results are pretty surprising, especially if one comes from a strong background in CCD technology.  In general CMOS image sensors are much, much more robust than CCDs ever were.  And actually here you can see the incredible advantage of using a kind of standard process (being CMOS) instead of a dedicated process (being CCD).  And I do know that a modern CIS process is no longer the same as a standard CMOS process, but the non-imaging parts of a CIS process are still more or less a copy of a standard CMOS process.  As an example, the gate-protection circuitry on a CMOS image sensor is a thousand times better and more robust than a CCD gate-protection circuit.  And this type of characteristic immediately translates into a more robust overall product.

On a more scientific level, 2018 was a year without an International Image Sensor Workshop, but there was an outstanding imaging session at the IEDM.  Very interesting stuff was presented, and that actually raises the expectations for great papers at the next Workshop, to be held in June 2019 in Snowbird (UT).  This is actually the same location as 6 years ago, in 2013.

The core business of Harvest Imaging remains the technical training and technical courses conducted in the field of digital imaging.  Also in 2018 several open courses as well as in-house courses were organized.  On top of that, CEI also started with on-line courses in 2018 : CEI recorded 3 of my courses and made them available on-line.  All 3 courses have to do with the characterization of noise in an image sensor.  A bit along the same line : in January 2018, IEEE-SSCS broadcasted a webinar, a recording of one of my presentations on noise as well.  According to IEEE this was an overwhelming success of a kind they had never seen before : almost 1000 people registered up-front to attend the webinar.  Quite funny : while you sit in your office behind your PC to answer questions, about 1000 people from really all over the world are listening to the webinar.

Finally something about the yearly Harvest Imaging Forum 2018 : for the very first time, the forum had 2 speakers, one for every day.  Prof. Marian Verhelst talked about deep neural networks for imaging applications, and prof. Wilfried Philips spent one day on image fusion.  Both presentations were very well received, and for sure also in 2019 a new Harvest Imaging Forum will be organized.  First contacts with possible speakers have already been made, but it is not that easy to find people who can “entertain” a group of imaging professionals during two days.  Candidate speakers are welcome to contact me !

Unfortunately 2018 did not bring only good news.  The digital imaging world lost one of its members due to an accident.  Arnaud Darmont passed away while he was on a business trip in the USA.  Arnaud was the CEO and founder of Aphesa, a small consulting company also located in Belgium.  Recently Arnaud concentrated mainly on standardization topics ; he was the chair of the EMVA1288 standard.  Arnaud was also known as the conference chair of the Electronic Imaging Conference on Image Sensors.

So now that 2018 is almost completed, the question is : “What will 2019 bring ?”.  The technical courses and trainings will continue, that is for sure.  Several courses are already booked, and I do see a trend of moving from open courses (to which everyone can register) to in-house courses (with a dedicated agenda and tailored to the needs of the customer).  Characterization of the CMOS cameras will continue, with a special focus on UV radiation.  What will UV radiation do to the characteristics of a CMOS imager ?  First tests have started, but there is still quite some work to do before a decent answer can be given for this reliability test.  In other words : Harvest Imaging is ready to embrace 2019 in the same way as it did the previous years.

Wishing all my readers a Merry Christmas and a Happy New Year.  “See” you soon.

Albert, 21-12-2018.

 

Harvest Imaging Forum 2018 : detailed agenda of Day 2

Saturday, November 10th, 2018

On December 6th the 2-day Harvest Imaging Forum will start.  On the second day prof. Wilfried Philips will talk about “Data and Image Fusion”.

Here is the detailed agenda for the second day :

Day 2 9:00 – 10:30 Introduction, Data fusion : principles and theory (Bayesian estimation, priors and likelihood, Bayesian fusion, Application to image processing), Modeling structures in images
10:30 – 11:00 Break
11:00 – 12:30 Multi-sensor, multi-modal fusion : collocated sensors (color and grey-scale fusion, hyperspectral + RGB fusion, multi-focal fusion, temporal fusion : HDR)
12:30 – 13:45 Lunch break
13:45 – 15:15 Inter-camera pixel fusion : nearby sensors (fusion of heterogeneous sources, handling differences in viewpoint, stereo and multi-view stereo, view synthesis and inpainting), Geometric fusion : spatially distributed sensors (multi-view geometry, image stitching, simultaneous localization and mapping)
15:15 – 15:45 Break
15:45 – 17:15 Fusion in camera networks (fusion strategies, cooperative and assistive multi-camera tracking, multi-camera calibration)
17:15 – 17:30 Closure of the forum

Albert, 10-11-2018.

40 YEARS AGO (3)

Friday, October 26th, 2018

We are writing 1978, and I finished the first year of my PhD project.  In the first months of my PhD I spent a lot of time reading about transparent conductive gate electrodes.  The intention of my research project was to replace the classical poly-silicon gate electrodes on top of a CCD by something that is more transparent as well as more conductive than the poly-silicon gates used on top of the CCDs.  Poly-silicon gates do absorb the incoming light, especially in the blue part of the visible spectrum.  I found an incredibly useful publication by J. Vossen of RCA about Indium-Tin-Oxide or ITO, published in Thin Solid Films.  Vossen described the fundamental characteristics of ITO and explained how the characteristics of the film could be changed and optimized.  I have read this paper over and over again ; I think I knew the whole thing by heart.

In the lab where I was working, I found an old RF-sputtering system that was no longer in use.  So that piece of equipment could be used to deposit ITO layers, at least to show the feasibility of using ITO to replace poly-silicon.  The RF-sputtering tool was refurbished where needed and a special indium-tin target was ordered.  The size of the target was only 10 cm in diameter (cost !) and the substrates to be covered by the sputtered layer were just 2 cm in diameter (cost !).  But it worked !!  The sputtering of the pure metal In-Sn layer was done in 100 % oxygen and resulted in a pure In2O3-SnO2 layer on the substrates.  The deposition rate was very low, but for the first experiments it was acceptable.  Unfortunately the sputtering in pure oxygen resulted in a non-conducting but fully transparent film.  The next step was the development of an annealing step at higher temperatures to make the films conductive.  Several atmospheres and temperatures were tried out.  To avoid any cross-contamination with other materials in the clean room, a dedicated (pretty dirty) furnace outside the clean room was used to do the basic annealing experiments.  Nitrogen, argon, hydrogen : they all had a positive effect on the conductivity of the ITO films.  Finally we chose forming gas (90 % N2 + 10 % H2) at 425 deg.C, being also the very last temperature step in our CCD process at that time.  So in this way we could deposit non-conducting ITO on the CCD, and by the so-called sintering step needed for the aluminum interconnects, the ITO became conductive as well.  No extra processing was needed to get a conducting ITO film.

It was really a lot of fun doing the research on ITO for CCD gate electrodes.  No big obstacles were encountered (at least not in the first years of the research) and every experiment brought us a step forward.  Amazing to see that with the old equipment, but for sure with Vossen’s publication in my mind, thin films of transparent and conductive ITO could be realized.  The next step forward was to deposit the ITO on top of MOS capacitors and to see whether the new structures (ITO-SiO2-Si) could be used in the MOS technology.

Unfortunately in the early days of my experiments, a bad accident also happened.  I was not careful enough during the cleaning of the substrates and a few drops of diluted (luckily !!) sulfuric acid landed in my eye.  After an intense eye-shower a colleague (thanks Eddy !) brought me quickly to the hospital, and I was lucky that all the initial damage to my eye could be healed and no permanent issues were left.  So the lesson learned here : never forget the safety rules, for sure not if you become too focussed on your research and/or too enthusiastic about the results obtained.

Albert, 26-10-2018.

Harvest Imaging Forum 2018 : detailed agenda of Day 1

Thursday, October 18th, 2018

On December 6th the 2-day Harvest Imaging Forum will start.  On the first day prof. Marian Verhelst will talk about “Efficient Embedded Deep Learning for Vision Applications”.

Here is the detailed agenda for the first day :

Day 1 9:00 – 9:15 Introduction to the forum
9:15 – 10:45 Deep learning algorithms : from neural networks to deep neural networks; training and inference with deep NN; types of deep NN; key enablers & challenges for embedded deep NN for imaging
10:45 – 11:15 Break
11:15 – 12:45 Computer architectures for deep NN inference : data paths exploiting data reuse & systolic arrays; processor & memory architectures exploiting sparsity
12:45 – 14:00 Lunch break
14:00 – 15:30 Reduced precision computations; floating point to fixed point to resolution adaptive processors
15:30 – 16:00 Break
16:00 – 17:30 Algorithm-hardware co-optimization for efficient embedded neural networks; trends and outlook in the digital imaging domain
19:00 – 20:30 Dinner

The detailed agenda for day 2 will shortly follow.

Albert, 18-10-18.

Announcement Harvest Imaging Forum 2018

Friday, June 1st, 2018

I am happy to announce the upcoming Harvest Imaging Forum 2018.  For the first time the Harvest Imaging forum will have 2 speakers and 2 topics.  The following world-level experts in their fields will each give a 1-day presentation at the forum :

prof. dr. Marian VERHELST (KU Leuven, B), she will talk about : “Efficient Embedded Deep Learning for Vision Applications”

and

prof. dr. Wilfried PHILIPS (Univ. Ghent, B), he will talk about : “Image and Data Fusion”.

Although both topics can be treated totally independently of each other, both subjects are very strongly linked to the camera architecture of the future.

At this moment two sessions of the forum are scheduled : the first session on Dec. 5th (prof. Verhelst) and Dec. 6th (prof. Philips), 2018, and the second session on Dec. 6th (prof. Verhelst) and Dec. 7th (prof. Philips), 2018.  The forum will be held in Delft, the Netherlands.  More information about the exact content of the forum as well as the possibilities to register for the forum will come soon on the website of Harvest Imaging.

 

Albert, 01-06-2018.

Harvest Imaging Forum 2018

Sunday, April 29th, 2018

The preparations for the next Harvest Imaging Forum already started several weeks ago.  Recently I have decided to change the format slightly.  All 5 previous Harvest Imaging Forums had 1 speaker/1 topic covering the two days.  This year we will have 2 speakers for 2 topics (which are related to each other), but still covering two days.  At this moment two sessions are planned, the first one on December 5th and 6th, the second one on December 6th and 7th (yes, the two sessions overlap each other by 1 day).  Every session will allow a maximum of 30 participants.  At this moment the very last details are being discussed with the speakers.  As soon as everything is finalized, the topics and speakers will be announced.

Another change for 2018 is the choice of the hotel.  Unfortunately the hotel in Voorburg that was used for the last 4 editions is closed.  For that reason the Harvest Imaging Forum will move to the Westcord Hotel in Delft.

So already mark your agendas for December 2018 !

Albert, 29-04-2018.

ISSCC2018 (3)

Thursday, February 15th, 2018

Yoshioka (Toshiba) presented a 20-channel TDC/ADC hybrid SoC for 200 m ranging LiDAR.  The goal of this work was to develop a LiDAR system that can work for short distances (urban driving situations) as well as for larger distances, up to 200 m (highway situations, but not for Germany, there you need even larger distances !).  The complete system relies on a “Smart Accumulation Technique”, which was explained during the presentation but which is too complicated to explain here in a few words.  The advantage of this SAT method is the combination of working at longer distances while maintaining image quality.  Special attention was given to the choice of the LiDAR quantizer circuitry.  A TDC is fast and occupies a small area, but acquires only the ToF info, while an ADC is compatible with the proposed SAT method, but overall an ADC is slow and occupies a large area.  But if it is difficult to make a choice, then implement both !  And that is what is done in this hybrid concept : the TDC is used for short distances (0 … 20 m), the ADC is used for larger distances (20 … 200 m).
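The range partitioning is easy to picture with a little sketch (my own illustration of the hybrid idea, not the Toshiba implementation ; the 20 m crossover is the only number taken from the paper) :

```python
C_LIGHT = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s):
    """LiDAR distance from the round-trip time of the light pulse."""
    return C_LIGHT * round_trip_s / 2.0

def select_quantizer(distance_m):
    """Hybrid scheme as described : TDC for short range, ADC beyond 20 m."""
    return "TDC" if distance_m <= 20.0 else "ADC"
```

A 1 us round trip corresponds to roughly 150 m, so at highway distances the pulse timing is in the microsecond range, well within reach of a (slower) ADC path.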

 

Bamji (Microsoft) presented a kind of follow-on of the work presented a few years ago at ISSCC and very well described in JSSC (they got the Walter Kosonocky Award for this publication).  A 1 Mpixel ToF image sensor showed impressive results : 3.5 um pixels, GS, 320 MHz demodulation, a modulation contrast of 78 % at 320 MHz, a QE of 44 % at 860 nm, and a per-pixel AGC to obtain HDR.  The device is realized in a 65 nm BSI (1P8M) process.  Unfortunately it was not explained (not even in the Q&A) how the GS with an efficiency of 99.8 % is realized.

 

Ximenes (TU Delft) presented a 256×256 SPAD based ToF sensor.  The device is made in 45 nm (then you should be able to guess who fabricated the chip) and is stacked on a 65 nm logic chip.  It was claimed that for the very first time the digital supporting part of the detector was fully digitally synthesized.  Pixel pitch is 19.8 um with a fill factor of 31.3 %.  The overall system allows a distance range of 150 … 430 m with a precision in the order of 0.1 % and an accuracy of around 0.3 %.

 

Gasparini (FBK) closed the session with a talk about a 32×32 pixel array for quantum physics applications.  The presenter tried to explain the concept of entangled photons, but I am not sure whether all people in the audience understood this concept after a full day of presentations.  The sensor is based on SPADs, each with its own quenching circuit and TDC.  The chip was realized in a low-cost 150 nm 1P6M standard CMOS technology, pixel size 44.64 um (with in-pixel time-stamping, not surprising if David Stoppa is one of the co-authors) and almost 20 % fill factor.  Features are implemented to skip rows and frames to speed up the overall imaging system.

 

Overall the imaging session of ISSCC2018 was one of high quality : very well prepared presentations, great slides (16:9 for the first time at ISSCC).  There was a large audience (the largest ever for the image sensor session at this conference), and during the Q&A long waiting lines were piling up in front of the microphone.

Take away message : everything goes faster, lower supply voltages, lower power consumption, stacking is becoming the key technology, and apparently, the end of the developments in our field is not yet near !  The future looks bright for the imaging engineers !!!

Albert, 15-02-2018.

ISSCC2018 (2)

Wednesday, February 14th, 2018

Kumagai (Sony) talked about a 3.9 Mpixel event-driven BSI stacked CIS.  It is not the kind of sensor that is being researched by the group of Tobi Delbruck, nor what is being done at Chronocam.  But in this new device, the data is reduced drastically by on-chip binning, column-wise as well as row-wise.  The overall resolution of 3.9 Mpixels is reduced to only 16×5 macro pixels.  In this “macro” pixel mode, the power consumption is drastically reduced as well, and the sensor behaves as if in a sort of sleeping mode.  Once the sensor detects any motion in the image (by means of frame differencing), the device wakes up and switches to the full-resolution mode.  Also in the full-resolution mode, the CIS works at a 1.8 V supply voltage, which keeps the power consumption low there as well.  The device is realized in 90 nm 4Cu CIS technology, on top of a 40 nm 1Al6Cu logic chip.  Pixels are 1.5 um x 1.5 um.  In full resolution, 60 fps, 10 bits, the device consumes 95 mW.  In the sensing 16×5 macro-pixel mode, the power is lowered to 1.1 mW at 8 bits and 10 fps.  Random noise is 1.8 e, resulting in a dynamic range of 67 dB at 10 bits and full resolution, and of 96 dB in the sensing 16×5 mode.
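The wake-up logic by frame differencing can be sketched in a few lines (a conceptual illustration only ; the real chip does this in hardware on the binned macro pixels, and the threshold here is invented) :

```python
def motion_detected(prev_frame, curr_frame, threshold):
    """Wake-up test : sum of absolute macro-pixel differences vs a threshold.
    The frames are lists of binned macro-pixel values (e.g. the 16x5 grid)."""
    diff = sum(abs(a - b) for a, b in zip(prev_frame, curr_frame))
    return diff > threshold

def next_mode(current_mode, prev_frame, curr_frame, threshold=50):
    """Stay in the low-power 'sensing' mode until motion triggers full resolution."""
    if current_mode == "sensing" and motion_detected(prev_frame, curr_frame, threshold):
        return "full_resolution"
    return current_mode
```

The point of the architecture is that this comparison runs on only 80 macro-pixel values per frame, which is why the sensing mode can sit at 1.1 mW.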

 

Chou (TSMC) explained the ins and outs of a 1.1 um, 13.5 Mpixel CIS : 34 fps at full resolution, with switching options to 514 fps at 720p, 230 fps at 1080p and 58 fps at 2160p.  The basic idea is to skip columns in the reduced-resolution modes, while still using the full bank of column-level ADCs.  In this way 2 or 3 rows can be read out at the same time, which increases the frame rate.  Because of the different options to connect the columns to the ADCs, the interconnect is a bit complex, but of course the design and lay-out of the device has to be done only once.  Some numbers : the technology used for the sensor is 45 nm 1P4M, for the logic 65 nm 1P5M.  Noise is 1.8 e, column FPN 0.28 e, full well 4458 e, resulting in a dynamic range of 67.5 dB.

Yasue (NHK) presented a new 8K4K device for ultra-HD broadcasting applications.  It needs to be ready to provide us with super-quality slow-motion pictures of Tokyo 2020 !  The sensor runs in a progressive mode.  Key characteristics are low noise (for that reason a 3-stage pipeline ADC is used), duplicated source followers with parallel operation (to speed up the device) and an ultimate speed of 480 fps (to realize the super slow-motion option).  The pipeline ADC consists of a folding-integration stage, a cyclic stage and a SAR stage.  In the 120 fps mode the ADC works at 14 bits (noise is 3.2 e, 12.5 W), in the 240 fps mode at 12 bits (noise is 4.3 e, 9.8 W), and finally in the 480 fps mode at 10 bits (noise is 24 e, 9 W).  Apparently the specs are almost met in the 120 fps mode (the target was 3 e of noise), but there seems to be room for improvement at 480 fps.  Maybe at next year’s ISSCC ?

 

Albert, 14-02-2018.

ISSCC 2018 (1)

Tuesday, February 13th, 2018

Sakakibara (Sony) presented a paper about a BSI-GS CMOS imager with a pixel-parallel 14b ADC.  One can make a global shutter in a CMOS sensor in the charge domain, in the voltage domain, but also in the digital domain.  The latter requires an ADC per pixel (also known as a DPS : digital pixel sensor).  And this paper describes such a solution : a stacked image sensor with a single ADC per pixel.  Based on the recent history of Sony technology, it could be expected that this technology was coming.  The ADC (per pixel !) is a single-slope ADC with a comparator and a latch per pixel.  The design of the pixel is such that the source follower is already part of the comparator.  That makes the structure very compact, but it requires two Cu-Cu contacts per pixel between the top and bottom layers.  To get rid of all the data generated by all these ADCs, a pretty complex data-line structure is implemented.  The technologies used : 90 nm 1P4M for the top layer, 65 nm 1P7M for the bottom layer.  Pixel size is 6.9 um x 6.9 um, 1.46 Mpixels, noise level 5.15 e in a high-power mode of 746 mW @ 660 fps or 8.77 e in a low-power mode of 654 mW @ 660 fps.  Dynamic range for the two modes is respectively 70.2 dB and 65.7 dB.  PLS for this global shutter is -75 dB.
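For readers less familiar with single-slope conversion, the principle is simple enough to sketch (a behavioral model of the generic single-slope scheme, not the Sony circuit ; ramp step and levels are invented) : a global ramp and a global counter run together, and each pixel’s comparator latches the running count the moment the ramp crosses its signal level.

```python
def single_slope_convert(vpix, ramp_step, n_bits=14):
    """Single-slope conversion : the count latched when the global ramp
    (count * ramp_step) first reaches the pixel voltage."""
    for count in range(1 << n_bits):
        if count * ramp_step >= vpix:   # comparator trips : latch the count
            return count
    return (1 << n_bits) - 1            # ramp ended : saturated code
```

One ramp serves every pixel simultaneously, which is why only a comparator and a latch are needed per pixel ; the price is conversion time, since the ramp must sweep all 2^14 levels.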

 

Nishimura of Panasonic talked about an organic-photoconductive-film GS CIS with an in-pixel noise canceller.  It is not the first time that this technology is presented at ISSCC, but this time an extra noise-cancellation “trick” is applied in the pixel to lower the noise.  Do not forget that this pixel is basically a 3T pixel that suffers from kTC noise.  A similar noise-cancellation method was applied as what we have seen earlier with the so-called “active reset”, but no longer on the column level ; this time on the pixel level.  Key advantages of this device are the GS mode with very good PLS (-110 dB), a sensitivity tunable by applying the right voltage across the photoconductor, and a very high saturation level.  The paper claims that the reset noise is lowered by a factor of 10, while the saturation level is increased by a factor of 10 (but the high-saturation mode cannot be combined with the low noise level).  The pixel size is 3 um x 3 um, for 8192 x 4320 pixels, 60 fps, a 12-bit ADC, 65 nm 1P4Cu1Al process technology, and a noise of 8.6 e (in the proceedings ; not consistent with the presentation, where 4.5 e was mentioned).  But again not a single indication about dark current and dark non-uniformities.  During the presentation as well as after the session, super-quality images were shown, but all were prerecorded.

 

Choi of Samsung presented a 24 Mpixel CIS with a pixel size of 0.9 um, using full-depth deep-trench isolation.  As can be expected, a 0.9 um pixel will suffer from full-well limitations, crosstalk and sensitivity issues.  Unless a thicker silicon layer (40 % thicker, going from 3 um to 4.25 um ; these last numbers are guesses !) is used (for this BSI device) in combination with DTI that goes all the way through the silicon.  So basically you create an isolated island for every single pixel (in the talk it was mentioned that if the pixels are fully isolated from each other by the DTI, they can be operated at a lower voltage, which is in its turn beneficial for dark current and dark non-uniformities).  The sensor uses stacking with TSVs, and apparently the trenches are filled (with poly-Si ? not sure about this) and are biased with a negative voltage to keep the dark current low.  All the techniques mentioned are not new, but their combination for a 0.9 um pixel is new.  The author made a comparison between this new 0.9 um pixel and an existing 1.0 um pixel, and the new one beats the old pixel in all aspects : full well 6000 e, dark temporal noise 1.4 e, dynamic range 64.9 dB, dark current (60 deg.C) 2 e/s.  A strong reduction in white spots and RTS pixels is mentioned in the overview table, but it is not clear what exactly is done in the technology to come to these levels.

 

Albert, 13-02-2018.