What is the difference between CMOS and CCD image sensors? Machine vision is indispensable in intelligent manufacturing, automation, and similar equipment, and machine vision in turn depends on image sensors. For decades, CCD and CMOS technologies have competed for dominance in image sensing. So what distinguishes these two sensor types? Let's find out.
CCD vs. CMOS
First of all, we need to clarify what CMOS and CCD mean.
CMOS stands for Complementary Metal Oxide Semiconductor, and CCD stands for Charge-Coupled Device. Both full names are a mouthful, which is why the abbreviations CMOS and CCD are what everyone uses.
CCD sensors get their name from how charge is read out after the image is captured. Thanks to a special manufacturing process, the sensor can transfer accumulated charge without degrading image quality. The entire pixel area can be regarded as a matrix, with each matrix element being one pixel.
01. Microstructure of CMOS and CCD
The basic photosensitive unit of a CCD is a metal oxide semiconductor (MOS) capacitor, which serves as both a photodiode and a storage device.
A typical CCD device has four layers: (a) a boron-doped silicon substrate at the bottom (Silicon Substrate), (b) a channel stop layer (Channel Stop), (c) an oxide layer (Silicon Dioxide), and (d) a gate electrode for control (Polysilicon Gate Electrode). When the gate voltage is high, a potential well forms under the oxide layer. Incoming photons excite electrons into the potential well, where they can be collected and guided, while the surrounding doped region prevents the excited electrons from leaking away.
Generating an image with a CCD camera involves four main stages or functions: charge generation through the interaction of photons with the light-sensitive area of the device, collection and storage of the released charge, charge transfer, and charge measurement.
① Signal charge generation: the first step in the CCD working process is the generation of charge. A CCD converts incident light into charge output, based on the internal photoelectric effect (photovoltaic effect) of semiconductors.
② Signal charge storage: the second step is the collection of signal charge, i.e. gathering the charge excited by incident photons into signal charge packets.
③ Signal charge transfer (coupling): the third step is the transfer of the signal charge packets, moving each collected packet from one pixel to the next until all charge packets have been shifted out.
④ Signal charge detection: the fourth step is charge detection, converting the charge arriving at the output stage into a current or voltage. These four stages are sketched in code below.
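As a rough illustration of the four stages, here is a minimal Python sketch. All names and constants (quantum efficiency, full-well capacity, conversion gain) are hypothetical and chosen only for illustration, not taken from any real sensor:

```python
import numpy as np

rng = np.random.default_rng(0)

# ① Charge generation: photons hitting each pixel free electrons
#    with some quantum efficiency (QE). Values are illustrative.
QE = 0.6                                      # fraction of photons converted to electrons
photons = rng.poisson(lam=500, size=(4, 4))   # incident photons per pixel
electrons = rng.binomial(photons, QE)         # photoelectrons generated

# ② Charge storage: each potential well saturates at its full-well capacity.
FULL_WELL = 2000                              # electrons (hypothetical)
charge = np.minimum(electrons, FULL_WELL)

# ③ Charge transfer: packets are shifted out row by row (vertical shift),
#    then pixel by pixel along the serial register (horizontal shift).
# ④ Charge detection: the output node converts each packet to a voltage.
V_PER_ELECTRON = 5e-6                         # conversion gain, volts per electron
voltages = []
for row in charge:                            # vertical shift into serial register
    for packet in row:                        # horizontal shift to output amplifier
        voltages.append(packet * V_PER_ELECTRON)

print(np.array(voltages).reshape(4, 4))       # one voltage per pixel, readout order
```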
CMOS microstructure: the biggest difference from CCD lies in how the signal is transported. CMOS uses metal wires to carry it. In a typical CMOS pixel schematic, the sensor pixel (a reverse-biased photodiode) is connected to the pixel electronics in the readout circuitry.
02. Working principle of CMOS and CCD sensors
CMOS structure: the pixel array, digital logic circuits, signal processors, clock controllers, and so on.
CCD structure: horizontal and vertical shift registers, their clock controllers, and an output amplifier, among other parts. Abstracting the two sensors yields the following two circuit diagrams.
Schematic of a CCD sensor: a CCD is essentially a large array of semiconductor "buckets" that convert incoming photons into electrons and hold the accumulated charge. The vertical shift register transfers these charges down to the horizontal shift register, from which the charge is converted into a voltage and output.
Schematic of a CMOS sensor: instead of transporting buckets of charge, the CMOS design converts the charge into a voltage on the spot and outputs that voltage over fine metal wires.
CMOS image sensor working diagram: CCDs convert charge into voltage at the end of the chain, whereas CMOS sensors perform this conversion at the very beginning, because each pixel contains its own charge-to-voltage converter. The resulting voltage is then output over compact, energy-efficient metal wires.
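The difference in where the charge-to-voltage conversion happens can be sketched in a few lines of Python. The charge values and the shared conversion gain below are invented for illustration; the point is only the structure of the two readout paths:

```python
import numpy as np

charge = np.array([[1200, 800], [950, 1500]])   # electrons per pixel (illustrative)
GAIN = 5e-6                                     # conversion gain, volts per electron

def ccd_readout(q):
    """CCD: charge packets are shifted to ONE shared output node
    and converted to voltage there, at the very end of the chain."""
    return [packet * GAIN for row in q for packet in row]

def cmos_readout(q):
    """CMOS: every pixel converts its own charge to voltage immediately;
    only voltages travel over the column wires."""
    return (q * GAIN).ravel().tolist()          # per-pixel conversion

print(ccd_readout(charge))
print(cmos_readout(charge))                     # same values, different architecture
```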
The full-frame CCD is the simplest sensor structure and can be produced at very high resolutions. It has only a single serial transfer register as a buffer and cannot control shutter speed through the sensor itself. The sensor must therefore sit behind a mechanical shutter, since its light-sensitive surface may only be exposed during the exposure time. Full-frame CCDs are mainly used for photographic purposes in science and astronomy.
In an interline-transfer CCD, at the end of the exposure the charge from every sensor cell is transferred simultaneously into adjacent intermediate storage and read out from there via vertical and horizontal shifts. The advantage is that image information can be taken off the sensor cells quickly and completely, without needing a mechanical shutter for intermediate storage. The disadvantage is a lower fill factor, which means less sensitivity to light and a greater tendency to noise in low light.
After exposure, the charge stored in each cell is transferred very quickly into the transfer register, from which it is read out in the same way as in a full-frame CCD.
Some designs combine the interline and full-frame principles: the charge in the active sensor cells is transferred very quickly to intermediate storage, and from there, equally quickly, into a completely light-shielded transfer register. A classic metaphor for the CCD working principle is regional rainfall measurement.
The CCD serial readout can be demonstrated with a bucket brigade measuring regional rainfall. Rainfall intensity varies from place to place across the array of buckets, just as incident photons vary across an imaging sensor, so the buckets collect different amounts of signal (water) during integration. A conveyor belt of buckets plays the role of the serial register: an entire row of collection buckets is shifted in parallel into the bank of serial-register buckets.
In the serial shift and readout operation, the accumulated rainwater in each bucket is transferred sequentially into a calibrated measuring vessel, analogous to the CCD output amplifier. Once the contents of every container on the serial conveyor have been measured in sequence, another parallel register shift moves the next row of collection buckets onto the serial conveyor, and the process repeats until every bucket (pixel) has been measured.
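The rain-bucket analogy maps directly onto a parallel shift followed by serial readout. Here is a minimal sketch; the array size and "rainfall" values are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
buckets = rng.uniform(0, 10, size=(3, 4))  # rainwater collected per bucket (pixel)

measurements = []
rows = list(buckets)
while rows:
    # Parallel register shift: the next row of buckets is moved,
    # as a whole, onto the serial conveyor (serial register).
    serial_register = list(rows.pop(0))
    # Serial shift: buckets are emptied one at a time into the
    # calibrated measuring vessel (the output amplifier).
    while serial_register:
        measurements.append(serial_register.pop(0))

print(measurements)   # one measurement per bucket, in readout order
```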
03. Conclusion
With the above in mind, we can state the conclusion directly. The main difference between CCD and CMOS sensors is how each pixel is handled: a CCD moves photogenerated charge from pixel to pixel and converts it into a voltage at a single output node, whereas a CMOS imager places multiple transistors in every pixel, converts the charge into a voltage inside the pixel, and moves the amplified signal over more conventional metal wires.
In other words: in a CCD, the charge generated in each pixel is first stored in the vertical register, then transferred row by row into the horizontal register, and finally measured pixel by pixel and amplified at the output. A CMOS sensor generates a voltage within each pixel and sends it to the output amplifier over metal wires, which is faster.
CMOS has some clear advantages over CCD:
CMOS sensors read out data faster than CCDs. In a CMOS sensor each pixel is amplified individually, rather than having all the data processed at one common output node as in a CCD. Because every pixel has its own amplifier, noise can be suppressed at the pixel level before amplification, instead of amplifying each pixel's raw signal in one go at the end node.
CMOS sensors are more energy efficient and have lower production costs.
They can be built on existing semiconductor production lines, and they consume less power than the high-voltage analog circuits in CCDs. CCD sensors still deliver better image quality, but CMOS sensors win on power consumption and price.
Understand CMOS image sensors in one article
In 1873, the scientists Joseph May and Willoughby Smith discovered that selenium crystals generate an electric current when exposed to light. From this discovery, electronic imaging began to develop, and the performance of image sensors has improved steadily ever since.
1. 1950s: appearance of the photomultiplier tube (PMT).
2. 1965-1970: IBM, Fairchild, and other companies developed optoelectronic and bipolar diode arrays.
3. 1970: the CCD image sensor was invented at Bell Laboratories. With its high quantum efficiency, high sensitivity, low dark current, high uniformity, and low noise, it came to dominate the image sensor market.
4. Late 1990s: the CMOS era began.
CCD cameras in space missions
1. In 1997, the Cassini spacecraft carried CCD cameras (wide-angle and narrow-angle).
2. NASA Administrator Daniel Goldin championed the "faster, better, cheaper" approach, arguing that reducing mass, power, and cost on future spacecraft required miniaturized cameras. Electronic integration is a good route to miniaturization, and MOS-based image sensors came in passive-pixel and active-pixel (3T) configurations.
Historical evolution of image sensors - CMOS image sensors
1. CMOS image sensors make "chip cameras" possible, and the trend toward camera miniaturization is clear.
2. In 2007, the emergence of the Siimpel AF camera module marked a major breakthrough in camera miniaturization.
3. The rise of chip cameras has opened new opportunities for technological innovation in many fields: automotive, military and aerospace, medical, industrial manufacturing, mobile photography, and security.
CMOS image sensors move toward commercialization
1. In February 1995, Photobit was established to commercialize CMOS image sensor technology.
2. Between 1995 and 2001, Photobit grew to roughly 135 people, funded mainly by custom design contracts from private companies, important support from the SBIR program (NASA/DoD), and investments from strategic business partners. During this period more than 100 new patent applications were filed.
3. After commercialization, CMOS image sensors developed rapidly, with broad application prospects, gradually replacing CCDs as the new mainstream.
Wide application of CMOS image sensors
In November 2001, Photobit was acquired by Micron Technology, and the license returned to Caltech. By then dozens of competitors had emerged, such as Toshiba, STMicro, and OmniVision; the CMOS image sensor business owed part of its rise to those early efforts at commercializing the technology. Sony and Samsung later became number one and number two in the global market. Micron subsequently spun off Aptina, which was acquired by ON Semi and currently ranks fourth. CMOS sensors have gradually become the mainstream in imaging and are used in a wide range of settings.
The development history of CMOS image sensors
1970s: Fairchild. 1971: invention of FDA & CDS technology. 1980s: Hitachi; early 1980s: Sony. Mid-1980s: major breakthroughs in the consumer market. 1990: NHK/Olympus Amplified MOS Imager (AMI), also known as CIS. 1993: JPL CMOS active pixel sensor. 1998: single-chip camera. After 2005: CMOS image sensors become mainstream.
Introduction to CMOS image sensor technology
CMOS image sensor
A CMOS image sensor (CIS) integrates analog and digital circuits. It consists mainly of four parts: the microlens, the color filter (CF), the photodiode (PD), and the pixel design.
1. Microlens: each microlens has a spherical surface; it gathers light that would otherwise strike inactive parts of the CIS and focuses it onto the color filter.
2. Color filter (CF): separates the red, green, and blue (RGB) components of the incoming light; laid over the photosensitive elements, the filters form a Bayer array.
3. Photodiode (PD): the photoelectric conversion device that captures light and converts it into current; generally implemented as a PIN diode or PN-junction device.
4. Pixel design: realized through the active pixel sensor (APS) built into the CIS. An APS typically uses 3 to 6 transistors per pixel, reads or buffers the pixel from a large capacitor array, and converts the photocurrent into a voltage inside the pixel, achieving good sensitivity and noise performance. A minimal model of a 3T pixel cycle follows this list.
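A 3T active pixel can be modeled as three phases: reset, integrate, read. The sketch below mimics that cycle with entirely hypothetical constants (reset voltage, photodiode capacitance, source-follower gain); it is an illustration of the principle, not any real pixel's parameters:

```python
# Minimal model of one 3T active-pixel cycle (all constants hypothetical).
V_RESET = 3.0        # volts on the photodiode node after reset (reset transistor)
C_PD = 10e-15        # photodiode capacitance, farads
SF_GAIN = 0.85       # source-follower buffer gain (read transistor)

def read_pixel(photocurrent_a: float, t_int_s: float) -> float:
    """Reset -> integrate -> read for a single 3T pixel."""
    v = V_RESET                                   # 1) reset phase
    v -= (photocurrent_a * t_int_s) / C_PD        # 2) integration discharges the node
    v = max(v, 0.0)                               #    node cannot go below ground
    return v * SF_GAIN                            # 3) readout via source follower

print(read_pixel(photocurrent_a=50e-15, t_int_s=0.01))  # bright pixel: larger drop
print(read_pixel(photocurrent_a=5e-15,  t_int_s=0.01))  # dim pixel: smaller drop
```

Note that the signal is the voltage *drop* from the reset level: a brighter pixel discharges the node further, so its output voltage is lower.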
Bayer array filters and pixels
1. Each square on the photosensitive element represents one pixel block, with a layer of color filter (CF) above it; after the CF separates the RGB components of the incoming light, the photosensitive element and filters together form a Bayer array. The classic Bayer array images RGB in dispersed 2x2 four-pixel cells. The Quad Bayer array expands this to 4x4, with each color occupying a 2x2 block of adjacent pixels (a generating sketch follows this list).
2. A pixel is the basic unit of digital display. In essence it is an abstract sample, which we represent as a colored square.
3. The pixels in the illustration are filled with the three primary colors R (red), G (green), and B (blue). The side length of each small pixel block is the pixel size; in the illustration it is 0.8 μm.
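The two layouts described above are easy to generate programmatically. This sketch tiles a classic 2x2 RGGB Bayer pattern and a 4x4 Quad Bayer pattern (each color occupying a 2x2 block); the RGGB ordering is one common convention, assumed here for illustration:

```python
import numpy as np

def bayer_pattern(h, w):
    """Classic Bayer: R G / G B repeated every 2x2 pixels."""
    tile = np.array([["R", "G"],
                     ["G", "B"]])
    return np.tile(tile, (h // 2, w // 2))

def quad_bayer_pattern(h, w):
    """Quad Bayer: each color expanded to a 2x2 block, repeating every 4x4."""
    base = np.array([["R", "G"],
                     ["G", "B"]])
    tile = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)
    return np.tile(tile, (h // 4, w // 4))

print(bayer_pattern(4, 4))       # standard mosaic
print(quad_bayer_pattern(4, 4))  # same colors, 2x2-grouped
```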
Bayer array filter and pixels
Each small square on the filter corresponds to one pixel block of the photosensitive element; that is, a specific color filter covers each pixel. A red filter block, for example, lets only red light through to the photosensitive element, so the corresponding pixel records only red-light information. Color reconstruction in post-processing is then needed to estimate the missing colors and produce a complete color photo. This whole chain of photosensitive element → Bayer filter → color reconstruction is what the Bayer-array approach refers to.
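That "estimating" step is known as demosaicing. A deliberately crude nearest-neighbor reconstruction (real pipelines use bilinear or edge-aware interpolation) shows the idea: each pixel records only one channel, and the two missing channels are borrowed from neighbors in the same 2x2 cell. The RGGB layout is assumed, as above:

```python
import numpy as np

def demosaic_nearest(raw):
    """Crude demosaic for an RGGB Bayer mosaic: every pixel in a 2x2 cell
    takes that cell's R, averaged G, and B as its full RGB value."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), dtype=float)
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            r = raw[y, x]                            # top-left of the cell is R
            g = (raw[y, x + 1] + raw[y + 1, x]) / 2  # two G samples per cell
            b = raw[y + 1, x + 1]                    # bottom-right is B
            rgb[y:y + 2, x:x + 2] = (r, g, b)        # fill the whole 2x2 cell
    return rgb

raw = np.arange(16, dtype=float).reshape(4, 4)       # stand-in sensor readout
print(demosaic_nearest(raw)[0, 0])                   # RGB triple for pixel (0, 0)
```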
Front-side illuminated (FSI) and back-illuminated (BSI)
Early CIS designs used front-side illumination (FSI, FRONT-SIDE ILLUMINATED), in which metal (aluminum, copper) interconnect layers sit between the Bayer array filter and the photodiodes (PD). These numerous metal connections interfere strongly with light entering the sensor surface, preventing a considerable part of it from reaching the photodiode layer below, so the signal-to-noise ratio is low. With back-side illumination (BSI, BACK-SIDE ILLUMINATED), the metal layers are moved behind the photodiodes, so the light collected by the Bayer array filter is no longer blocked by metal interconnects and enters the photodiodes directly. BSI not only greatly improves the signal-to-noise ratio but also allows more complex, larger-scale circuitry that increases readout speed.
CIS parameter - frame rate
Frame rate: the frequency at which successive bitmap frames appear on the display, i.e. how many images can be output per second. A key point in designing a high-pixel-count CIS is the analog circuitry: as the pixel count grows, the sensor cannot sustain a high frame rate without correspondingly fast readout and processing circuits.
Sony released its first Exmor sensor as early as 2007. The Exmor design places an independent ADC (analog-to-digital converter) under each column of pixels, so analog-to-digital conversion is completed on the CIS chip itself, effectively reducing noise, greatly increasing readout speed, and simplifying PCB design.
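A quick back-of-the-envelope calculation shows why column-parallel ADCs matter for frame rate. The sensor geometry below is made up for illustration and is not any particular Exmor part:

```python
# Why column-parallel ADCs help: back-of-the-envelope throughput numbers.
ROWS, COLS = 3000, 4000          # hypothetical 12-megapixel sensor
FPS = 60                         # target frame rate

pixels_per_second = ROWS * COLS * FPS

# Single shared ADC: one converter must digitize every pixel.
single_adc_rate = pixels_per_second
print(f"single ADC: {single_adc_rate / 1e6:.0f} Msamples/s")      # 720 Msamples/s

# Column-parallel ADCs (one per column, as in Exmor-style designs):
# each converter only handles the pixels in its own column.
per_column_rate = ROWS * FPS
print(f"per-column ADC: {per_column_rate / 1e3:.0f} ksamples/s")  # 180 ksamples/s
```

Each of the 4000 column ADCs runs four thousand times slower than the single shared converter would have to, which is what makes on-chip conversion at high frame rates practical.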
Applications of CMOS image sensors
Global market size of CMOS image sensors
2017 was a high-growth year for CMOS image sensors, with 20% year-on-year growth. In 2018 the global CIS market was US$15.5 billion, and it was expected to grow about 10% year-on-year in 2019 to US$17 billion. The CIS market is currently in a period of steady growth and is expected to approach saturation around 2024, at a market size of about US$24 billion.
CIS application - vehicle field
1. CIS applications in vehicles include: rear-view camera (RVC), surround-view system (SVS), camera monitoring system (CMS), FV/MV, and DMS/IMS systems.
2. Global sales of automotive image sensors are growing year by year.
3. Rear-view cameras (RVC) are the sales mainstay and show steady growth: global sales were 51 million units in 2016, 60 million in 2018, 65 million in 2019, and more than 70 million in 2020.
4. Global FV/MV sales are growing rapidly: 10 million units in 2016 and 30 million in 2018. FV/MV was expected to keep growing quickly, with 40 million units in 2019 and 75 million by 2021, close to RVC's global sales.
Automotive field - HDR technology method
1. HDR, i.e. high dynamic range imaging, achieves a larger exposure dynamic range than ordinary digital imaging.
2. Time multiplexing: the same pixel array captures multiple frames using staggered rolling shutters (staggered HDR). Advantage: the simplest HDR approach, compatible with traditional sensor pixels. Disadvantage: the captures occur at different times, causing motion artifacts. (A fusion sketch follows this list.)
3. Spatial multiplexing: a single pixel-array frame is split into multiple captures obtained in different ways. (a) Independent exposure control at the pixel or row level. Advantage: fewer motion artifacts within a single frame than the staggered approach. Disadvantage: loss of resolution, and motion artifacts remain at edges. (b) Multiple photodiodes per pixel under the same microlens. Advantage: no motion artifacts within a single multi-capture frame. Disadvantage: reduced sensitivity for the same pixel area.
4. Very large full-well capacity: pixels that can hold far more charge before saturating, which extends dynamic range directly.
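To make the time-multiplexed (staggered) approach concrete, here is a minimal exposure-fusion sketch. The scene values, bit depth, and 16:1 exposure ratio are all invented for illustration: a long exposure that clips highlights and a short exposure that preserves them are merged into one frame with extended dynamic range.

```python
import numpy as np

FULL_SCALE = 1023                  # 10-bit pixel output (illustrative)
scene = np.array([50, 400, 3000, 9000], dtype=float)   # scene radiance, arbitrary units

def capture(radiance, exposure):
    """Simulated capture: response is linear until the pixel saturates."""
    return np.minimum(radiance * exposure, FULL_SCALE)

long_exp  = capture(scene, 1.0)    # good shadows, highlights clip at 1023
short_exp = capture(scene, 1/16)   # highlights preserved, shadows noisier

# Fusion: trust the long exposure unless it saturated; where it did,
# substitute the short exposure scaled back up by the exposure ratio.
hdr = np.where(long_exp < FULL_SCALE, long_exp, short_exp * 16)
print(long_exp)    # [  50.  400. 1023. 1023.]  -> highlights clipped
print(hdr)         # [  50.  400. 3000. 9000.]  -> full range recovered
```

In a real staggered-HDR sensor the two exposures are captured at slightly different times, which is exactly where the motion artifacts noted above come from.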