Basic principles of video coding
Video image data is highly correlated; in other words, it contains a great deal of redundant information. This redundancy can be divided into spatial redundancy and temporal redundancy. Compression technology removes the redundancy from the data (i.e., the correlation between data items). Compression techniques include intra-frame image compression, inter-frame image compression and entropy coding.
Temporal redundancy
Inter-frame coding removes redundancy in the time domain. It comprises the following three parts:
-Motion compensation
Motion compensation predicts and compensates the current local image from a previous local image; it is an effective way to reduce the redundancy of a frame sequence.
-Motion representation
Different regions of the image need different motion vectors to describe their motion. The motion vectors themselves are then compressed by entropy coding.
-Motion estimation
Motion estimation is the set of techniques for extracting motion information from a video sequence.
Note: the common compression standards all use block-based motion estimation and motion compensation (MEMC), as in the sketch below.
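To make block-based MEMC concrete, here is a minimal full-search block-matching sketch in Python/NumPy. It is an illustration only, not the algorithm of any particular standard; the 16×16 block size, ±7 search range and SAD cost are common textbook choices, not mandated values.

```python
import numpy as np

def full_search_me(ref, cur, block=16, search=7):
    """Full-search block matching: for each block of the current frame,
    find the displacement into the reference frame minimizing SAD."""
    h, w = cur.shape
    mvs = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            cur_blk = cur[by:by + block, bx:bx + block].astype(int)
            best, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    sad = np.abs(ref[y:y + block, x:x + block].astype(int) - cur_blk).sum()
                    if best is None or sad < best:
                        best, best_mv = sad, (dy, dx)
            mvs[by // block, bx // block] = best_mv
    return mvs

def motion_compensate(ref, mvs, block=16):
    """Build a prediction of the current frame from ref plus the motion vectors."""
    pred = np.zeros_like(ref)
    for by in range(mvs.shape[0]):
        for bx in range(mvs.shape[1]):
            dy, dx = mvs[by, bx]
            y, x = by * block + dy, bx * block + dx
            pred[by * block:(by + 1) * block, bx * block:(bx + 1) * block] = \
                ref[y:y + block, x:x + block]
    return pred
```

The encoder then transmits the motion vectors plus the residual `cur - motion_compensate(ref, mvs)`, which normally costs far fewer bits than coding the frame itself.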
Spatial redundancy
Intra-frame coding and entropy coding are the main techniques used to remove it:
-Transform coding
Intra-frame images and prediction residual signals still contain substantial spatial redundancy. Transform coding maps the spatial signal into an orthogonal vector space, reducing its correlation and thus its data redundancy.
-Quantization coding
After transform coding, a set of transform coefficients is produced; these are quantized so that the encoder's output reaches a target bit rate. This quantization is the step where precision is lost (see the sketch after this list).
-Entropy coding
Entropy coding is lossless coding. It further compresses the coefficients and motion information obtained after transformation and quantization.
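The transform/quantize/entropy-code pipeline just described can be sketched in a few lines. This example runs SciPy's DCT on one 8×8 block with a uniform quantizer; the step value is illustrative, not a standard quantization table.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    # 2-D type-II DCT with orthonormal scaling
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

block = np.random.randint(0, 256, (8, 8)).astype(float) - 128  # level-shifted pixels
q_step = 16  # illustrative uniform quantizer step, not a standard table

coeffs = dct2(block)
quantized = np.round(coeffs / q_step).astype(int)  # lossy step: precision drops here
# Entropy coding (Huffman/arithmetic) would losslessly pack `quantized`;
# the many zeros produced by quantization are what make it compress well.
reconstructed = idct2(quantized * q_step)
print(f"max reconstruction error: {np.abs(block - reconstructed).max():.1f}")
```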
Basic framework of video coding (Figure)
Development of International Audio and Video Compression Standards
H.261
The H.261 standard was designed for ISDN and is mainly used for real-time encoding and decoding. The total delay of compression and decompression must not exceed 150 ms, and the bit rate is p×64 kb/s (p = 1-30).
H.261 mainly adopts motion-compensated inter-frame prediction, DCT transform, adaptive quantization, entropy coding and other compression techniques. It has only I frames and P frames (no B frames), and motion estimation is accurate only to integer pixels. It supports two picture formats: QCIF and CIF.
H.263
The H.263 standard is an international standard for very low bit rate image coding. On the one hand, it is based on H.261, with hybrid coding at its core: its basic block diagram is very similar to H.261's, and its source data and bitstream organization are also similar. On the other hand, H.263 absorbs effective and reasonable elements of other international standards such as MPEG, for example half-pixel-precision motion estimation and PB-frame prediction, which give it better performance than H.261.
H.263 can operate below 64 kb/s, and its transmission bit rate can be variable. It supports multiple resolutions: SQCIF (128×96), QCIF, CIF, 4CIF and 16CIF.
International standards related to H.261 and H.263
International standards related to H.261:
H.320: narrowband videophone systems and terminal equipment;
H.221: frame structure for a 64-1920 kb/s channel in audiovisual teleservices;
H.230: frame-synchronous control and indication signals for audiovisual systems;
H.242: system for audiovisual terminals using digital channels up to 2 Mb/s…
International standards related to H.263:
H.324: very low bit rate multimedia communication terminal equipment;
H.223: multiplexing protocol for very low bit rate multimedia communication;
H.245: control protocol for multimedia communication;
G.723.1: speech coders with transmission rates of 5.3 kb/s and 6.3 kb/s…
Joint Photographic Experts Group
In 1986, the Joint Photographic Experts Group (JPEG) was established under the International Organization for Standardization, devoted mainly to compression coding standards for continuous-tone, multi-level grayscale, still digital images. Coding based on the discrete cosine transform (DCT) is the core of the JPEG algorithm.
MPEG-1/2
The MPEG-1 standard encodes moving pictures and their accompanying audio for digital storage media at a data rate of about 1.5 Mb/s. The video block diagram of MPEG-1 is similar to that of H.261.
Features of MPEG-1 video compression: 1. random access; 2. fast forward/fast reverse search; 3. reverse playback; 4. audio-visual synchronization; 5. error resilience; 6. encoding/decoding delay. MPEG-1 compression strategy: to raise the compression ratio, intra-frame and inter-frame compression must be used together. The intra-frame algorithm is almost the same as JPEG's, using DCT-based transform coding to reduce spatial redundancy. The inter-frame algorithm uses prediction and interpolation; the prediction error can be further compressed by DCT transform coding. Inter-frame coding reduces redundancy along the time axis.
MPEG-2 has been called "the TV standard for the 21st century". It makes many important extensions and improvements to MPEG-1, but its basic algorithm is the same.
MPEG-4
The MPEG-4 standard is not a substitute for MPEG-2; it targets different application fields. The original aim of MPEG-4 was ultra-low bit rate compression (below 64 kb/s) for video conferencing and videophones. During its development, the MPEG committee recognized that people's demands on media, especially video, were shifting from simple playback to content-based access, retrieval and manipulation.
MPEG-4 is quite different from JPEG and MPEG-1/2 discussed above. It provides a broader platform for multimedia data compression and coding: it defines a format and a framework rather than a specific algorithm, aiming to create a freer environment for communication and development. The new goal of MPEG-4 is therefore to support a wide variety of multimedia applications, especially content-based retrieval and access of multimedia information, with decoders configured in the field according to application requirements. The coding system is open, and new, effective algorithm modules can be added at any time. Applications include real-time audio-visual communication, multimedia communication, remote monitoring/surveillance, video on demand, home shopping/entertainment, and so on.
JVT: A New Generation Video Compression Standard
JVT is the Joint Video Team established by ISO/IEC MPEG and ITU-T VCEG, dedicated to formulating a new generation of digital video compression standards.
The JVT standard's official name in ISO/IEC is MPEG-4 AVC (Part 10); in the ITU-T it is H.264 (formerly H.26L).
H.264/AVC
H.264 integrates the advantages of previous standards and the experience accumulated in developing them. Its design is simple, making it easier to popularize than MPEG-4. H.264 pioneered new compression techniques such as multiple reference frames, multiple block types, integer transforms and intra prediction, and uses finer sub-pixel motion vectors (1/4, 1/8 pixel) and a new generation of loop filter, greatly improving compression performance.
H.264 has the following advantages:
-Efficient compression: compared with H.263+ and MPEG-4 SP, bit rate is reduced by 50%.
-Good flexibility in time delay constraints.
-Fault tolerance
-Complexity and scalability of encoding/decoding
-Fully specified decoding: no encoder/decoder mismatch (illustrated in the sketch after this list).
-High quality applications
-Network friendly
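Two of the points above, the integer transform and "no mismatch", are connected: H.264's 4×4 core transform uses only integer arithmetic, so encoder and decoder compute bit-identical results. The matrix below is the core forward transform defined by the standard; the per-coefficient scaling that real codecs fold into quantization is omitted here, so this is only a sketch.

```python
import numpy as np

# H.264 4x4 forward core transform matrix (an integer approximation of the DCT)
C = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]])

def forward_transform(x):
    """Core transform Y = C * X * C^T; exact in integer arithmetic,
    so encoder and decoder can never drift apart (no mismatch)."""
    return C @ x @ C.T

residual = np.arange(16).reshape(4, 4)   # toy residual block
coeffs = forward_transform(residual)     # integer coefficients
# In H.264 the per-coefficient scaling is merged into the quantizer step.
print(coeffs)
```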
Video coding technology in monitoring
At present, several video coding technologies are mainly used in monitoring: MJPEG, MPEG-1/2, MPEG-4 (SP/ASP) and H.264/AVC. End users mainly care about clarity, storage capacity (bandwidth), stability and price. The choice of compression technology greatly affects all of these factors.
MJPEG
MJPEG (Motion JPEG) compression is based on still-image compression. Its main feature is that it largely ignores changes between frames in the video stream and compresses each frame independently.
MJPEG compression can produce high-definition video images, and the frame rate and resolution can be adjusted dynamically. However, because inter-frame changes are ignored, a large amount of redundant information is stored repeatedly, so each frame occupies considerable space. Even the best popular MJPEG implementations only get a frame down to about 3 KB, and 8-20 KB per frame is typical!
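The storage consequences of those per-frame sizes are easy to work out. A minimal calculation using the 3/8/20 KB figures from the text; the 25 fps frame rate (PAL) is an assumption:

```python
def mjpeg_cost(kb_per_frame, fps=25):
    """Bandwidth and per-hour storage for intra-only (MJPEG) video."""
    kbps = kb_per_frame * 8 * fps                     # kilobits per second
    gb_per_hour = kb_per_frame * fps * 3600 / 1024 / 1024
    return kbps, gb_per_hour

for size in (3, 8, 20):                               # KB/frame figures from the text
    kbps, gbh = mjpeg_cost(size)
    print(f"{size:2d} KB/frame -> {kbps:6.0f} kb/s, {gbh:5.2f} GB/hour")
# 20 KB/frame at 25 fps is 4000 kb/s and ~1.7 GB per hour, which is why
# intra-only compression is costly to store compared with inter coding.
```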
MPEG-1/2
The MPEG-1 standard mainly targets SIF resolution (352×240 for NTSC, 352×288 for PAL) with a target bit rate of about 1.5 Mb/s. Compared with MJPEG, MPEG-1 significantly improves real-time compression, per-frame data size and processing speed. However, MPEG-1 also has shortcomings: storage requirements are still too large, definition is not high enough, and network transmission is difficult.
MPEG-2 extends and upgrades MPEG-1, remaining backward compatible with it, and is mainly used in storage media, digital TV, high definition and other application fields. Its resolutions include low (352×288), medium (720×480) and the second-highest level (1440×1152). Compared with MPEG-1, MPEG-2 video raises the resolution and satisfies users' demand for high definition. However, because its compression performance improves little, storage requirements remain too large, and it is still unsuitable for network transmission.
MPEG-4
Compared with MPEG-1/2, the MPEG-4 video compression algorithm is significantly better at low bit rates. At CIF (352×288) or higher definition (768×576), MPEG-4 has clear advantages over MPEG-1 in clarity and storage requirements, and is better suited to network transmission. In addition, MPEG-4 can conveniently adjust the frame rate and bit rate on the fly to reduce storage.
Because the MPEG-4 system design is very complicated, full MPEG-4 compatibility is difficult to achieve, and deployment in video conferencing, videophones and similar fields has proved hard, defeating the original intent. In addition, enterprises in China face high patent fees. The current terms are:
-Each decoding device must pay MPEG LA US$0.25;
-Codec devices are additionally charged by usage time (US$0.04/day = US$1.20/month = US$14.40/year).
H.264/AVC
H.264 combines the strengths of previous standards and makes breakthrough progress in many areas, giving it far better overall performance than earlier standards:
-Compared with H.263+ and MPEG-4 SP, it can save up to 50% in bit rate, greatly reducing storage requirements;
-It can provide high video quality at different resolutions and different bit rates;
-It adopts a "network-friendly" structure and syntax, making it better suited to network transmission.
With its simple design, H.264 is easier to popularize than MPEG-4, easier to implement in video conferencing and videophones, easier to interoperate, and can readily be combined with low bit rate speech codecs such as G.729 to form a complete system.
MPEG LA learned from MPEG-4, whose high patent fees hindered adoption, and set low fees for H.264: playback of H.264 is essentially free of charge; when an H.264 codec is embedded in a product, there is no royalty for annual shipments under 100,000 units, US$0.20 per unit beyond 100,000, and US$0.10 per unit beyond 5 million. Low patent fees make it easier for Chinese H.264 monitoring products to reach the global market.
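One plausible reading of those tiers as marginal (per-unit) rates can be written as a small function. The thresholds and rates are taken from the text as reconstructed above; the marginal-tier structure is an assumption, not a statement of the actual license:

```python
def h264_royalty(units):
    """Illustrative tiered royalty (USD) under the terms quoted above:
    first 100,000 units/year free, $0.20 each beyond 100,000,
    $0.10 each beyond 5,000,000. Assumed marginal tiers."""
    if units <= 100_000:
        return 0.0
    if units <= 5_000_000:
        return (units - 100_000) * 0.20
    return (5_000_000 - 100_000) * 0.20 + (units - 5_000_000) * 0.10

for n in (50_000, 500_000, 6_000_000):
    print(f"{n:>9,} units -> ${h264_royalty(n):,.2f}")
```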
Selection of video coding resolution in monitoring
At present, the monitoring industry mainly uses the following resolutions: SQCIF, QCIF, CIF and 4CIF.
The advantages of SQCIF and QCIF are low storage requirements, usability over narrowband connections, and cheap products. The disadvantage is that image quality is usually poor and unacceptable to users.
CIF is currently the mainstream choice in the monitoring industry. Its advantages are low storage requirements, transmissibility over ordinary broadband networks, relatively low price and image quality that most users find acceptable. The disadvantage is that the picture cannot meet high-definition requirements.
4CIF is standard definition resolution, which has the advantage of clear image. Disadvantages are large storage capacity, high network transmission bandwidth requirements and high price.
A new resolution choice: 528×384
Some products have adopted 2CIF (704×288) to address CIF's low definition and 4CIF's large storage requirements and high price. However, because 704×288 only increases the horizontal resolution, the improvement in image quality is not especially obvious.
In testing we found that another 2CIF-class resolution, 528×384, addresses the problems of CIF and 4CIF better than 704×288 does. Especially at bit rates of 512 kb/s to 1 Mb/s, it yields stable, high-quality images that meet users' demands for better picture quality. This format has already been adopted by many online multimedia broadcasts and accepted by most users; for example, Hangzhou Netcom's online cinema uses 512×384 and reliably obtains DVD-like image quality at 768 kb/s.
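One way to see why 528×384 is a sweet spot is bits per pixel at a given bit rate. The quick comparison below uses the resolutions discussed in this section and a mid-range 768 kb/s; the 25 fps frame rate (PAL) is an assumption:

```python
def bits_per_pixel(width, height, kbps, fps=25):
    """Average bit budget per pixel at a given bit rate and frame rate."""
    return kbps * 1000 / (width * height * fps)

for name, (w, h) in {"CIF": (352, 288), "528x384": (528, 384),
                     "2CIF": (704, 288), "4CIF": (704, 576)}.items():
    bpp = bits_per_pixel(w, h, 768)          # 768 kb/s, mid-range from the text
    print(f"{name:8s} {w}x{h}: {bpp:.3f} bit/pixel")
# 528x384 carries exactly twice the pixels of CIF yet still gets a workable
# bit budget per pixel at this rate, while 4CIF would be starved.
```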
The best way to realize video coding in monitoring
At present, video coding is in a period of rapid technological change, and the compression performance of video coding is also constantly improving.
Monitoring products mainly use ASICs and DSPs. Because the design and production cycle of an ASIC chip is long, ASICs cannot keep up with the pace of video coding development. DSP chips, being general-purpose designs, can implement various video coding algorithms, allow the video encoder to be updated promptly, and keep pace with video coding progress. In addition, a DSP allows the encoder to be configured more flexibly than an ASIC, so the encoder can achieve its best performance.
Current technical level of Hikvision products
Hikvision products adopt the most advanced H.264 video compression algorithm and high-performance DSP processor.
The powerful H.264 video compression engine gives the products an extremely high compression ratio, high image quality and good network transmission performance. The high-performance DSP allows flexible configuration of the video codec: resolution, frame rate, bit rate, image quality and so on can be set dynamically. Dual-stream output is supported, enabling simultaneous local storage and network transmission.
With TM130x DSP-based products, a single chip can compress one channel of video in real time at the following resolutions: SQCIF, QCIF, CIF, 2CIF (PAL: 704×288 or 528×384).
With DM642 DSP-based products, a single chip can compress up to four channels of video in real time at SQCIF, QCIF, CIF or 2CIF (PAL: 704×288 or 528×384), or two channels of 4CIF video in real time.
Digital networking of TV program production has become a hot topic, and one of its key technologies is digital video compression. The Moving Picture Experts Group (MPEG) is a working group of ISO/IEC responsible for international standards for the compression, decompression, processing and coding of moving pictures, audio and combinations of the two. MPEG has produced the MPEG-1, MPEG-2 and MPEG-4 standards. MPEG-1 and MPEG-2 are widely used in multimedia industries such as digital TV, VCD, video on demand, archiving and online music. MPEG-4 mainly targets audio and video coding below 64 kb/s and narrowband multimedia communication. MPEG is currently drafting MPEG-7 and MPEG-21. Meanwhile, M-JPEG, MPEG-2 and DV occupy the leading positions in today's video compression practice: each is hard to replace, and they compete fiercely while developing side by side.
Both M-JPEG and DV use intra-frame compression, so their compression efficiency is lower than MPEG-2's. At low bit rates, MPEG-2 provides a higher compression ratio than M-JPEG while keeping better image quality; when high image quality is required (as in program editing and post-production), the gap between MPEG-2 and M-JPEG or DV is much smaller. The diversity of TV services requires compression standards to offer multiple bit rates, and variable bit rate (VBR) is important for TV stations to use resources effectively. MPEG-2 can adjust its output bit rate by changing the GOP structure and the DCT and Huffman coding parameters; M-JPEG can adjust its compression ratio by changing DCT and Huffman coding parameters; the DV format, because of its application profile, does not provide VBR. M-JPEG was developed earlier and has been used in nonlinear video editing for many years; its software and hardware are mature and cheap, currently about US$5,000 less on average than an MPEG-2 platform. At present M-JPEG, DV and MPEG-2 each have their strengths, and equipment for all three is widely deployed. Japan and North America mostly use the DV format for post-production; in its 1999 technical statements D84 and D85, the EBU recommended that TV stations use 50 Mb/s pure-I-frame 4:2:2P MPEG-2 in the studio. In China, M-JPEG is widely used, and editing in MPEG-2 IBP format is hotly debated.
The following is a comparison of two video compression technologies, namely M-JPEG and MPEG-2, which are mainly used in digital networks of TV stations. Finally, MPEG-7 is briefly summarized.
M-JPEG is, in essence, JPEG compression adapted to moving images: JPEG compresses the image data of one frame with a DCT transform, and M-JPEG applies JPEG to every frame of the television digital signal (4:2:2 data). Because TV editing and effects production treat the frame as the basic unit, frame-based (intra-frame) M-JPEG compression has been applied successfully in digital video systems, especially digital nonlinear editing systems. Most nonlinear editing systems in China currently use 4:1 M-JPEG compression, generally considered an acceptable broadcast level. A PAL 4:2:2 digital signal compressed 4:1 has a data rate of 5 MB/s (40 Mb/s), and one hour of video occupies 18 GB of storage. Because M-JPEG is an intra-frame method, it offers frame-accurate random access with no access delay, enabling frame-accurate editing.
MPEG-2 compression, by contrast, exploits the similarity between adjacent frames of a moving image. Through motion prediction it removes the data a frame shares with the previous frame, recording only what differs, which greatly improves compression efficiency. This approach is also called inter-frame (frame-correlated) compression, and it works very well for video: at broadcast digital video quality, the compression ratio can reach 20:1, the data rate drops to 1 MB/s (8 Mb/s), and one hour of video occupies 3.6 GB. Storage space is used efficiently, and network transmission efficiency is more than five times that of an M-JPEG system. This greatly benefits the storage, transmission, editing and playout of MPEG-2 video, cutting storage costs and admitting many types of storage media: hard disk, optical disc, data tape, memory chips and so on.
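The storage figures above follow directly from the data rates; a two-line check (decimal gigabytes, numbers from the text):

```python
def gb_per_hour(mb_per_s):
    """Storage per hour of video at a constant data rate (decimal GB)."""
    return mb_per_s * 3600 / 1000

print(gb_per_hour(5.0))   # M-JPEG at 4:1  -> 18.0 GB/hour
print(gb_per_hour(1.0))   # MPEG-2 at 20:1 ->  3.6 GB/hour
# The 5x gap in data rate is also the gap in network transmission
# efficiency the text cites for MPEG-2 over M-JPEG.
```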
However, because in MPEG-2 only the I frame is a complete, independently coded picture, frame-accurate splicing is harder and requires support from hardware boards or software. MPEG-2 has two compression modes, intra-frame and inter-frame, and uses three picture types: I, P and B frames. An I frame is intra-coded without motion compensation and provides a moderate compression ratio; since it depends on no other frame, it serves as the random-access entry point and as a decoding reference. A P frame is predicted from the previous I or P frame and compressed with a motion compensation algorithm, achieving a higher ratio than an I frame; it is the reference for decoding B frames and subsequent P frames, so any errors in it propagate. A B frame is reconstructed by interpolation from the I/P or P/P frames before and after it and does not propagate errors; it uses bidirectional prediction, providing the highest compression ratio. Hardware board manufacturers have been working to solve IBP-frame editing for MPEG-2, and several Chinese companies, such as Aoweixun, Suo Beier and Dayang, have solved frame-accurate IBP editing in software, making it feasible to use MPEG-2 throughout TV program production, transmission, storage and playout and to build station-wide digital network systems.
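The editing difficulty comes from the dependency structure just described. The toy function below, given a GOP in display order, lists the frames that must be decoded to reconstruct an arbitrary cut point; the GOP pattern used is a typical example, not a mandated one:

```python
def frames_needed(gop, cut):
    """Return the frames that must be decoded to reconstruct frame `cut`
    in a GOP given as a string like 'IBBPBBPBBPBB' (display order)."""
    needed = set()
    def resolve(i):
        if i in needed or i < 0 or i >= len(gop):
            return
        needed.add(i)
        if gop[i] == 'P':                    # P depends on the previous I/P
            resolve(max(k for k in range(i) if gop[k] in 'IP'))
        elif gop[i] == 'B':                  # B depends on the surrounding I/P pair
            resolve(max(k for k in range(i) if gop[k] in 'IP'))
            later = [k for k in range(i + 1, len(gop)) if gop[k] in 'IP']
            if later:
                resolve(min(later))
    resolve(cut)
    return sorted(needed)

print(frames_needed("IBBPBBPBBPBB", 8))      # -> [0, 3, 6, 8, 9]
# Cutting at a B frame late in the GOP forces decoding back to the I frame,
# which is why frame-accurate IBP editing needs hardware or software support,
# while intra-only M-JPEG can cut at any frame directly.
```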
In October 1996, the Moving Picture Experts Group began to address the description of multimedia content, in the form of the Multimedia Content Description Interface (MPEG-7 for short). MPEG-7 will extend the limited capabilities of today's content-identification methods and cover more data types. Its goal is to support many kinds of audio-visual description, including free text, n-dimensional spatio-temporal structure, statistical information, objective and subjective attributes, production attributes and combination information. For visual information, descriptions will cover color, visual objects, texture, sketch, shape, volume, spatial relationships, motion and deformation.
The goal of MPEG-7 is to provide a way to describe multimedia material at different levels of abstraction, so as to express users' information needs at different levels. Taking visual content as an example, the lower abstraction levels would include descriptions of shape, size, texture, color, motion (trajectory) and position; for audio, the lower levels include pitch, mode, tempo, tempo changes and the spatial position of the sound. MPEG-7 also aims to support flexible data management and the globalization and interoperability of data resources.
For future multimedia services, content representation and description must be considered together, that is, many services involving content representation must first deal with content description. By using MPEG-7 to describe available audio-visual information, we can quickly find the information we want, interact with multimedia content more freely, reuse the content of audio-visual information, or combine some components of these contents in new ways.
Codec technology has improved continuously over the past decade, and the latest codecs (H.264/AVC and VC-1) represent the third generation of video compression technology. Choosing the right codec for a specific application and optimizing its real-time implementation remain a huge challenge: the best design must balance compression efficiency against available computing power. ……
Video compression is an important driving force behind every exciting new video product. How to extract the best compression efficiency from limited computing power is itself a deep subject.
The main challenge of digital video is that raw, uncompressed video requires enormous amounts of data to store or transmit. For example, standard-definition NTSC video digitized at 30 frames per second in 4:2:2 YCrCb at 720×480 requires a data rate of over 165 Mb/s; storing a 90-minute video takes about 110 GB, more than 25 times the capacity of a standard DVD-R. Even the low-resolution video common in streaming applications (e.g. CIF: 352×288, 4:2:0, 30 frames per second) needs over 36.5 Mb/s, far beyond the sustained 1-10 Mb/s that today's ADSL, 3G wireless or broadband networks can provide. Clearly, storing or transmitting digital video requires compression.
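The quoted rates can be reproduced in a few lines (4:2:2 sampling carries 2 bytes per pixel, 4:2:0 carries 1.5):

```python
def raw_rate_mbps(width, height, fps, bytes_per_pixel):
    """Uncompressed video data rate in Mb/s."""
    return width * height * bytes_per_pixel * fps * 8 / 1e6

sd = raw_rate_mbps(720, 480, 30, 2.0)     # NTSC SD, 4:2:2 YCrCb
cif = raw_rate_mbps(352, 288, 30, 1.5)    # CIF, 4:2:0
print(f"SD 4:2:2  : {sd:6.1f} Mb/s")      # ~165.9 Mb/s, as quoted
print(f"CIF 4:2:0 : {cif:6.1f} Mb/s")     # ~36.5 Mb/s, as quoted
print(f"90 min SD : {sd / 8 * 90 * 60 / 1000:5.1f} GB")  # ~112 GB (~110 GB in text)
```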
The purpose of video compression is to encode digital video so that it occupies as little space as possible while maintaining video quality. The theoretical basis of codec technology is the mathematics of information theory, but developing a practical codec is as much an art as a science.
Compression tradeoff
Many factors must be considered when choosing the codec for a digital video system. Chief among them are the application's video quality requirements, the characteristics of the transmission channel or storage medium (speed, latency, error behavior), and the format of the source content. Equally important are the expected resolution, target bit rate, color depth, frames per second, and whether the content and display are progressive or interlaced. Compression usually requires trading the application's video quality requirements against the others. Is the primary use storage, unicast, multicast, two-way communication or broadcast? For storage applications, how much capacity is available and how long must the content be kept? For everything else, what is the peak bit rate? For two-way video communication, what end-to-end system delay can be tolerated? If it is not two-way, can the content be encoded offline in advance, or must it be encoded in real time? How error-prone is the network or storage medium? Depending on their target applications, different compression standards handle these trade-offs in different ways.
On the other hand, the cost of real-time codec processing must also be weighed. New algorithms that achieve higher compression ratios, such as H.264/AVC or WMV9/VC-1, demand more processing power, which affects codec equipment cost, system power consumption and system memory.
……
Standards are very important for the spread of codec technology. Thanks to economies of scale, users look for affordable standards-based products; industry is willing to invest in standards because they guarantee interoperability between manufacturers; and content providers favor standards because standards give their content a long life cycle and broad demand. Although almost every video standard targets a few specific applications, a standard that fits can also be advantageous in other applications.
To achieve better compression and open new market opportunities, the ITU and MPEG keep developing compression technology and new standards. China has recently produced a national video coding standard, AVS, which we will also introduce later. Standards currently in preparation include the ITU/MPEG joint scalable video coding work (an amendment to H.264/AVC) and MPEG multi-view video coding. Existing standards also keep evolving to meet new application requirements: H.264, for example, recently defined the Fidelity Range Extensions to serve new markets such as professional digital editing, HD-DVD and lossless coding.
Terminal devices using digital video compression technology range from battery-driven portable devices to high-performance basic devices.
The best processor for digital video depends on the target application. TI offers a variety of DSPs that support multiple standards and satisfy the main design and system constraints, ranging from the low-power C5000 DSPs and OMAP mobile application processors to the high-performance C6000 DSPs and the video-optimized, high-performance DM64x and DM644x digital media processors.
Texas Instruments' (TI) DM series processors are designed specifically for the demands of high-end video systems. The latest in the series is the powerful DM6446 [15], built on TI's DaVinci technology [16]. The dual-core architecture of the DM6446 combines the advantages of DSP and RISC technology, integrating a C64x+ DSP core clocked at 594 MHz with an ARM926EJ-S core. The new-generation C64x+ DSP is the highest-performance fixed-point DSP on the TMS320C6000(tm) platform, based on an enhanced second generation of TI's high-performance, advanced VLIW architecture, and it is code-compatible with previous-generation C6000 DSPs. A programmable digital media processor such as the DM644x can support all existing industry standards and proprietary video formats on a single chip. The DM6446 also provides on-chip memory, including a second-level cache, and many peripherals with video-specific functions. It further includes a video/imaging coprocessor (VICP) that offloads the heavy video and imaging processing of algorithms such as JPEG, H.264, MPEG-4 and VC-1, leaving more DSP MIPS for video post-processing or other concurrent functions.
A compression standard specifies the required syntax and the available tools, but much of an encoder's quality depends on the specific implementation. The main variables include the rate-control algorithm, single-pass versus multi-pass encoding, the I/B/P frame ratio, the motion search range, the motion search algorithm, and which individual tools and modes are selected. This flexibility allows different trade-offs between computational load and quality; clearly, any encoder can deliver different video quality levels depending on the bit rate it is given.
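Rate control, first on that list, can be sketched as a feedback loop that nudges the quantization parameter toward a per-frame bit budget. This is a generic illustration, not the algorithm of any standard or product; real encoders use buffer models (e.g. an HRD), frame-type weighting and smoother adaptation:

```python
def update_qp(qp, bits_produced, bits_target, qp_min=1, qp_max=51):
    """One step of a trivial rate-control loop: raise the quantization
    parameter when a frame overshoots its bit budget, lower it when
    the frame undershoots."""
    if bits_produced > 1.1 * bits_target:
        qp += 1          # coarser quantization -> fewer bits, lower quality
    elif bits_produced < 0.9 * bits_target:
        qp -= 1          # finer quantization -> more bits, higher quality
    return max(qp_min, min(qp_max, qp))

# e.g. 512 kb/s at 25 fps leaves ~20480 bits per frame on average
target = 512_000 // 25
qp = 28
for produced in (30_000, 26_000, 21_000, 15_000):   # hypothetical frame sizes
    qp = update_qp(qp, produced, target)
    print(f"frame used {produced} bits -> next QP {qp}")
```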
A growing number of video compression standards provide ever higher compression efficiency and richer tools for specific end applications. Moreover, the trend toward networking means many products must support multiple standards, and the proliferation of standards and proprietary algorithms makes it hard to commit to a single one, especially since hardware decisions often precede product deployment. Each video coding algorithm also offers many tools and features that trade compression efficiency against complexity, and selecting them is an iterative process tied closely to the specific application and use case. With the number of codecs to support rising, and with codecs increasingly optimized per solution and application, flexible media processors are becoming the general trend in digital video systems. Digital media processors such as the DM6446 fully meet the performance requirements and have a flexible architecture, so new standards, including H.264, AVS and WMV9, can be brought to market quickly. Algorithms can be implemented while a standard is still being defined, and software algorithms and tools can be kept up to date, tracking standard revisions and the application's evolving quality requirements.