CCD and CMOS
First, we need to know what CMOS and CCD stand for. CMOS is the abbreviation of complementary metal oxide semiconductor; CCD is the abbreviation of charge-coupled device. Quite a mouthful, aren't they? The abbreviations CMOS and CCD are much easier on the ear.
The CCD sensor takes its name from the way charge is read out after an image is captured. Thanks to a special manufacturing process, the sensor can transfer accumulated charge across the chip without degrading image quality. The whole pixel area can be regarded as a matrix, with each matrix element being one pixel.
01. Microstructure of CMOS and CCD
The basic photosensitive unit of the CCD is the MOS capacitor (metal-oxide-semiconductor capacitor), which serves as both photodiode and storage device.
A typical CCD device has four layers: (a) a boron-doped silicon substrate, (b) a channel-stop layer, (c) a silicon dioxide layer, and (d) a polysilicon gate electrode for control. When the gate voltage is high, a potential well forms under the oxide layer. Incident photons excite electrons, which are collected and guided within the potential well, while the surrounding doped regions prevent the excited electrons from leaking away.
Image generation in a CCD camera involves four main stages or functions: charge generation through the interaction of photons with the device's photosensitive area; charge collection and storage; charge transfer; and charge measurement.
① Signal charge generation: the first step in CCD operation is generating charge. The CCD converts the incident optical signal into charge, based on the photoelectric (photovoltaic) effect inside the semiconductor.
② Signal charge storage: the second step is collecting the signal charge, i.e. gathering the electrons excited by incident photons into signal charge packets.
③ Signal charge transfer (coupling): the third step is transferring the signal charge packets, i.e. moving each collected packet from one pixel to the next until all packets have been shifted out.
④ Signal charge detection: the fourth step is charge detection, i.e. converting the charge arriving at the output stage into a current or voltage (a numerical sketch of all four stages follows below).
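As a rough numerical illustration of these four stages, here is a minimal Python sketch; every parameter value in it (quantum efficiency, full-well capacity, transfer efficiency, conversion gain) is an invented placeholder rather than a figure from any real device:

```python
import numpy as np

rng = np.random.default_rng(0)

# ① Charge generation: photons hitting each pixel excite electrons
#    with some quantum efficiency (photoelectric effect).
QE = 0.6                                   # assumed quantum efficiency
photons = rng.poisson(lam=1000, size=(4, 4))
electrons = rng.binomial(photons, QE)      # photo-generated electrons

# ② Charge storage: each potential well holds at most the full-well
#    capacity; excess charge is lost (the pixel saturates).
FULL_WELL = 2000                           # assumed capacity, in electrons
stored = np.minimum(electrons, FULL_WELL)

# ③ Charge transfer: packets are shifted pixel to pixel; each shift
#    keeps a fraction CTE of the charge (charge transfer efficiency).
CTE = 0.99999
n_transfers = stored.shape[0] + stored.shape[1]   # rough path length
transferred = stored * CTE ** n_transfers

# ④ Charge detection: the output stage converts each packet into a
#    voltage via the conversion gain (microvolts per electron).
GAIN_UV_PER_E = 5.0
voltage_uv = transferred * GAIN_UV_PER_E
print(voltage_uv.round(1))
```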
Microstructure of CMOS: the biggest difference between CMOS and CCD is how charge is transported; CMOS uses metal wiring. In the schematic of CMOS pixel operation, the sensor pixel (a reverse-biased diode) is connected to the pixel electronics in the readout chip.
02. Working principle of CMOS and CCD sensors
CMOS structure: pixels, digital logic circuits, signal processors, clock controllers, and so on.
CCD structure: horizontal and vertical shift registers, clock controllers for those registers, and an output amplifier. Abstracting the two kinds of sensors gives the following two circuit diagrams.
Schematic diagram of the CCD sensor: a CCD is essentially a large array of semiconductor "buckets" that convert incident photons into electrons and hold the accumulated charge. The charge is shifted down through the vertical shift registers into the horizontal shift register, from which it is converted into a voltage output.
Schematic diagram of the CMOS sensor: the CMOS design does not pass charge buckets along; instead, it converts the charge into a voltage immediately and outputs that voltage over fine metal wiring.
The CCD converts charge into voltage at the end of the process, while the CMOS sensor performs this conversion at the very beginning, because each pixel contains its own voltage converter; the voltage is then output through small, power-efficient wires.
The full-frame CCD is the simplest sensor and can be produced at very high resolution. It has only a single serial readout register as a buffer, and the shutter speed cannot be set by sensor control alone. The sensor must therefore sit behind a mechanical shutter, so that the photosensitive surface is exposed only during the exposure time. Full-frame CCDs are mainly used for photographic purposes in science and astronomy.
In the interline-transfer CCD, at the end of the exposure the charges from the sensor cells are transferred simultaneously into the intermediate storage of every pixel and read out from there by vertical and horizontal shifting. The advantage of the interline-transfer CCD is that, thanks to the intermediate storage, the image information leaves the sensor cells quickly and completely without a mechanical shutter. The disadvantage of this design is the sensor's low fill factor, which reduces sensitivity to light and makes the image noisier in low light.
In the frame-transfer CCD, after exposure the charge of the image area is moved very quickly into the storage cells; the charge is then read out from the transfer register in the same way as in a full-frame CCD.
The frame-interline-transfer CCD combines the interline and full-frame principles. With this structure, the charge of the active sensor cells is transferred very quickly to intermediate storage, and from there equally quickly into the fully light-shielded transfer register. A classic metaphor for the CCD's working principle is measuring rainfall over an area.
The serial readout of a CCD can be pictured as measuring rainfall across a field of buckets. The rain falling on the bucket array varies from place to place, just as incident photons do across an imaging sensor, so the buckets collect different amounts of signal (water) during integration. An entire row of buckets is then moved in parallel onto a conveyor belt of empty buckets representing the serial register.
In the serial shift and readout step, the water accumulated in each bucket is transferred, one bucket at a time, into a calibrated measuring container, which plays the role of the CCD output amplifier. Once the contents of every container on the serial conveyor have been measured in turn, another parallel register shift moves the next row of collection buckets onto the serial conveyor, and the process repeats until every bucket (pixel) has been measured.
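The bucket analogy maps directly onto the readout loop. Here is a minimal Python sketch of that serial readout, assuming a toy 3x4 charge array and an idealized, lossless shift (both are illustrative inventions):

```python
import numpy as np

# Accumulated charge in a tiny 3x4 pixel array (the "buckets of rain").
charge = np.array([[5, 8, 2, 7],
                   [1, 9, 4, 3],
                   [6, 2, 8, 5]], dtype=float)

def read_out_ccd(pixels):
    """Serial CCD readout: shift rows in parallel into the serial
    (horizontal) register, then empty that register one pixel at a
    time through the single output amplifier."""
    rows, cols = pixels.shape
    measured = []
    for _ in range(rows):
        # Parallel (vertical) shift: the bottom row drops into the
        # serial register, everything above moves down one row.
        serial_register = pixels[-1].copy()
        pixels = np.vstack([np.zeros((1, cols)), pixels[:-1]])
        # Serial (horizontal) shift: measure the end bucket, then
        # shift the rest along ("measuring cup" = output amplifier).
        for _ in range(cols):
            measured.append(serial_register[-1])
            serial_register = np.roll(serial_register, 1)
            serial_register[0] = 0
    return measured

print(read_out_ccd(charge))
```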
03. Conclusion
With the above understanding, we can state the conclusion directly. The main difference between CCD and CMOS sensors lies in how each pixel is processed: a CCD moves the photo-generated charge from pixel to pixel and converts it into voltage at the output node, while a CMOS imager uses several transistors in each pixel to convert the charge into voltage inside the pixel, then amplifies it and moves it over more conventional wires.
Put differently, the charge generated at a CCD pixel must first enter the vertical register, then be transferred row by row into the horizontal register, and finally each pixel's charge is measured and the output signal amplified in turn. A CMOS sensor generates the voltage at each pixel and sends it to the output amplifier over metal wires, which is faster.
CCD vs. CMOS
Compared with CCD, CMOS has some obvious advantages:
CMOS sensors retrieve data faster than CCDs. In a CMOS sensor each pixel is amplified individually, instead of all the data being processed at the CCD's common node. Because every pixel has its own amplifier, noise can be suppressed at the pixel level before amplification, giving a cleaner result than amplifying each pixel's raw data all at once at the end node.
CMOS sensors are also more power-efficient and cheaper to produce. They can be manufactured on existing semiconductor production lines, and their circuits consume less power than the high-voltage analog circuits in a CCD. CCD sensors still offer better image quality, but CMOS sensors win on power consumption and price.
Understanding the CMOS image sensor
In 1873, Joseph May and Willoughby Smith discovered that selenium crystals generate electricity when exposed to light, and the development of electronic imaging began; as the technology evolved, image sensor performance gradually improved. 1. 1950s: the photomultiplier tube (PMT) appeared. 2. 1965-1970: IBM and Fairchild developed photodiode and bipolar diode arrays. 3. 1970: the CCD image sensor was invented at Bell Laboratories and came to dominate the image sensor market with its high quantum efficiency, high sensitivity, low dark current, high uniformity and low noise. In the late 1990s, the CMOS era began.
CCD cameras in space exploration
1. In 1997, the Cassini space probe launched with wide-angle and narrow-angle CCD cameras in its Imaging Science Subsystem (ISS).
2. Daniel Goldin, the NASA Administrator, championed "faster, better, cheaper": reducing the mass, power and cost of future spacecraft would require miniaturized cameras, and electronic integration is a natural route to miniaturization. MOS-based image sensors appeared with both passive pixels and active pixels (3T).
Historical evolution of the image sensor: the CMOS image sensor
1. The CMOS image sensor made the "camera-on-a-chip" possible, and the trend toward camera miniaturization is clear.
2. In 2007, the appearance of the Siimpel AF camera module marked a major breakthrough in camera miniaturization.
3. The rise of the chip camera created new opportunities for technological innovation in many fields (automotive, military and aerospace, medical care, industrial manufacturing, mobile phone photography, security).
Commercialization of the CMOS image sensor
1. In February 1995, Photobit was founded to commercialize CMOS image sensor technology.
2. From 1995 to 2001, Photobit grew to about 135 people, funded mainly by custom design contracts from private companies, significant SBIR support (NASA / Department of Defense), and investment from strategic business partners. During this period the company filed more than 100 new patent applications.
3. After commercialization, CMOS image sensors developed rapidly with broad application prospects, and gradually replacing the CCD became the new trend.
Wide application of CMOS image sensor
In November 2001, Photobit was acquired by Micron Technology, and the licensed technology returned to the California Institute of Technology. By 2001, dozens of competitors such as Toshiba, STMicroelectronics and OmniVision had also entered the CMOS image sensor business, partly thanks to the early push to commercialize the technology. Sony and Samsung later became first and second in the global market. Micron subsequently spun off Aptina, which was acquired by ON Semiconductor and currently ranks fourth. The CMOS sensor has gradually become mainstream in photography and is used in a wide range of settings.
Development course of CMOS image sensor
1970s: Fairchild; 1980s: Hitachi; early 1980s: Sony. 1971: invention of FDA & CDS technology (floating-diffusion amplifier and correlated double sampling). Mid-1980s: major breakthrough in the consumer market. 1990: NHK/Olympus, Amplified MOS Imager (AMI), i.e. the CIS. 1993: JPL, CMOS active pixel sensor. 1998: monolithic camera-on-a-chip. After 2005: the CMOS image sensor became mainstream.
Brief introduction to CMOS image sensor technology
The CMOS image sensor integrates analog and digital circuits. It mainly consists of the microlens, the color filter (CF), the photodiode (PD), and the pixel design.
1. Microlens: spherical or mesh-shaped lenses sit above the pixels; they gather light that would otherwise fall on the inactive parts of the CIS and focus it onto the color filter.
2. Color filter (CF): separates the red, green and blue (RGB) components of the incoming light; laid over the photosensitive elements, the filters form a Bayer array.
3. Photodiode (PD): the photoelectric conversion device that captures light and converts it into current; generally built as a PIN diode or PN-junction device.
4. Pixel design: realized by the active pixel sensor (APS) on the CIS. An APS usually consists of 3 to 6 transistors; it can acquire or buffer signals from a large capacitor array and converts the photocurrent into a voltage inside the pixel, with good sensitivity and noise figure (a toy pixel model follows this list).
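To make the in-pixel conversion concrete, here is a toy Python model of an APS pixel integrating photocurrent on its sense node; the capacitance, reset voltage, gain and current values are invented for illustration and do not describe any real pixel:

```python
# Toy model of a 3T active pixel: reset, integrate, read.
# All constants are illustrative assumptions, not from a real device.
C_FD = 2e-15          # assumed sense-node capacitance, farads (2 fF)
V_RESET = 2.8         # assumed reset voltage, volts
SF_GAIN = 0.85        # assumed source-follower gain

def pixel_output(photocurrent_a, t_int_s):
    """Voltage at the pixel output after integrating a photocurrent
    on the sense node for t_int_s seconds."""
    charge = photocurrent_a * t_int_s          # Q = I * t
    v_drop = charge / C_FD                     # dV = Q / C
    v_node = max(V_RESET - v_drop, 0.0)        # node discharges toward 0
    return SF_GAIN * v_node                    # buffered by source follower

# Example: 50 fA of photocurrent integrated for 10 ms.
print(f"{pixel_output(50e-15, 10e-3):.3f} V")
```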
Bayer array filters and pixels
1. Each square on the photosensitive element represents one pixel, with a color filter (CF) attached on top. After the CF separates the RGB components of the incoming light, the array of photosensitive elements forms a Bayer pattern. The classic Bayer array tiles the sensor with a 2x2 cell of four pixels (one red, two green, one blue), while the Quad Bayer array expands the cell to 4x4, with each color occupying an adjacent 2x2 block.
2. A pixel (picture element) is the basic unit of a digital image; it is essentially an abstract sample, conventionally drawn as a colored square.
3. Each illustrated pixel is filled with one of the three primary colors: R (red), G (green) or B (blue). The edge length of each small pixel block is the pixel size; in the illustration it is 0.8 μm.
Each small square on the filter corresponds to one pixel block of the photosensitive element; that is, a specific color filter covers each pixel. A red filter block, for example, lets only red light through to the photosensitive element, so the corresponding pixel records only the red component. The missing color information must then be reconstructed ("guessed") to form a complete color photo. The whole chain from photosensitive element through Bayer filter to color reconstruction is called the Bayer array process (a minimal sketch of the chain follows).
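Here is a minimal Python sketch of that chain, assuming the textbook RGGB layout and a deliberately crude 2x2 averaging reconstruction in place of a real demosaicing algorithm:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample a full-color image through a classic RGGB Bayer filter:
    each pixel keeps only the one color its filter passes."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R on even rows/cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B on odd rows/cols
    return mosaic

def demosaic_naive(mosaic):
    """Reconstruct full color by averaging each 2x2 cell's known
    samples, a crude stand-in for real demosaicing."""
    h, w = mosaic.shape
    out = np.zeros((h, w, 3))
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            r = mosaic[y, x]
            g = (mosaic[y, x + 1] + mosaic[y + 1, x]) / 2
            b = mosaic[y + 1, x + 1]
            out[y:y + 2, x:x + 2] = (r, g, b)  # fill the 2x2 cell
    return out

rgb = np.random.rand(4, 4, 3)                 # toy "scene"
print(demosaic_naive(bayer_mosaic(rgb)).shape)
```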
Front-illuminated (FSI) and Back-illuminated (BSI)
Early CIS used the FSI (front-side illuminated) structure, in which the metal (aluminum, copper) wiring layers sit between the Bayer array filter and the photodiode (PD). The dense metal wiring strongly interferes with light entering the sensor surface, so a considerable fraction of the light never reaches the photodiode below, and the signal-to-noise ratio is low. In the improved BSI (back-side illuminated) structure, the metal layers are moved behind the photodiode, so the light collected by the Bayer array filter is no longer blocked by wiring and passes straight into the photodiode. BSI not only greatly improves the signal-to-noise ratio but also allows faster readout with more complex, larger-scale circuits.
CIS parameters: frame rate
Frame rate: the frequency at which bitmap images appear on the display, measured in frames, i.e. how many pictures can be shown per second. To achieve a high-pixel-count CIS design, the analog circuit design is critical: without a matching high-speed readout and processing circuit, high frame-rate output is impossible.
Sony released the first Exmor sensor as early as 2007. The Exmor sensor places an independent ADC (analog-to-digital converter) under each column of pixels, so analog-to-digital conversion is completed on the CIS chip itself, which effectively reduces noise, greatly increases readout speed, and simplifies PCB design (see the back-of-the-envelope calculation below).
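A back-of-the-envelope calculation shows why column-parallel ADCs raise the frame rate; the sensor geometry and ADC speed below are assumed round numbers, not Exmor specifications:

```python
# Why column-parallel ADCs raise the frame rate: with one ADC the
# whole pixel stream is serialized; with one ADC per column, each
# ADC only handles its own column's rows.
# Numbers are illustrative assumptions, not real Exmor specs.
ROWS, COLS = 3000, 4000          # 12-megapixel sensor
ADC_RATE = 1e6                   # conversions per second per ADC

# Single shared ADC: every pixel queues through one converter.
t_single = ROWS * COLS / ADC_RATE
# Column-parallel: COLS ADCs work simultaneously, one row at a time.
t_parallel = ROWS / ADC_RATE

print(f"single ADC:      {1 / t_single:8.2f} fps")   # ~0.08 fps
print(f"column-parallel: {1 / t_parallel:8.2f} fps") # ~333 fps
```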
Application of CMOS image sensor
Global market scale of CMOS image sensor
2017 was a high-growth year for the CMOS image sensor, up 20% year-on-year. In 2018 the global CIS market reached 15.5 billion USD, and it was expected to grow about 10% in 2019 to 17 billion USD. The CIS market is currently in a period of steady growth; it is estimated to gradually saturate by 2024 at a market scale of about 24 billion USD.
CIS applications: the vehicle field
1. CIS applications in the vehicle field include: the rear-view camera (RVC), the surround-view system (SVS), the camera monitoring system (CMS), FV/MV, and DMS/IMS systems.
2. Global sales of automotive image sensors are increasing year by year.
3. The rear-view camera (RVC) is the main seller and shows steady growth: global sales were about 51.1 million units in 2016, 60 million in 2018, 65 million in 2019, and exceeded 70 million in 2020.
4. Global FV/MV sales grew rapidly, from about 10 million units in 2016 to 30 million in 2018; FV/MV was expected to keep growing fast, with sales of 40 million units in 2019 and 75 million in 2021.
The vehicle field: HDR technical approaches
1. The HDR solution, i.e. high-dynamic-range imaging, achieves a larger exposure dynamic range than ordinary digital imaging technology.
2. Time multiplexing: the same pixel array captures multiple exposures through repeated rolling-shutter passes (staggered HDR). Advantage: the simplest HDR scheme, pixel-compatible with traditional sensors. Disadvantage: the captures occur at different times, which can produce motion artifacts.
3. Spatial multiplexing: a single pixel-array frame is split into multiple captures taken in different ways (a minimal fusion sketch follows this list). (1) Independent exposure control at the pixel or row level. Advantage: motion artifacts within a single frame are smaller than with staggered capture. Disadvantages: loss of resolution, and motion artifacts at edges. (2) Multiple photodiodes per pixel under the same microlens. Advantage: a single multi-capture frame has no motion artifacts. Disadvantage: reduced sensitivity for an equivalent pixel area.
4. Very large full-well capacity: pixels that can hold far more charge before saturating extend the dynamic range directly.
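As a minimal illustration of how two multiplexed captures become one HDR frame, the Python sketch below merges a short and a long exposure of the same scene; the exposure ratio, saturation level and scene values are all invented:

```python
import numpy as np

# Merge a short and a long exposure into one HDR estimate.
# All values are illustrative; real pipelines add per-pixel
# weighting, denoising and motion compensation.
SATURATION = 1000.0      # assumed pixel full-scale value
RATIO = 16.0             # assumed long/short exposure-time ratio

scene = np.array([10.0, 200.0, 800.0, 5000.0])    # true radiance
short = np.minimum(scene, SATURATION)              # short exposure
long_ = np.minimum(scene * RATIO, SATURATION)      # long exposure

# Where the long exposure saturates, fall back to the short one;
# elsewhere prefer the long exposure's better signal-to-noise ratio.
hdr = np.where(long_ < SATURATION, long_ / RATIO, short)
print(hdr)   # last value still clips: it saturates even the short frame
```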