I'm looking for the history of memory development, specifically the detailed progression from SDRAM to DDR2.

Memory development history


Before tracing the development of memory, it helps to define a few common terms. RAM is short for random access memory, and it comes in two varieties: static RAM (SRAM) and dynamic RAM (DRAM).

SRAM was once used as main memory. It is very fast and retains data without refreshing, because it stores each bit in a bistable circuit. That structure is complex: it takes several transistors per cell to form the storage element, so the silicon die is large and manufacturing cost is high. As a result, SRAM is now used only in caches, which are much smaller than main memory. When Intel integrated the L2 cache into the CPU (starting with the Mendocino core), SRAM lost its largest source of demand. Fortunately, as mobile phones moved from analog to digital, power-efficient SRAM found a new opportunity, and demand from network servers and routers has kept the SRAM market growing, if slowly.

DRAM, as the name implies, is dynamic RAM. Its structure is much simpler than SRAM's: the basic cell consists of one MOS transistor and one capacitor. With its simple structure, high integration density, low power consumption, and low production cost, DRAM is well suited to large-capacity memory, which is why most of the memory we use today is DRAM; the rest of this article therefore focuses on it. Before describing DRAM in detail, a word about synchronization. By access method, memory divides into synchronous and asynchronous types, the distinction being whether the memory is synchronized with the system clock. In the asynchronous scheme, a memory control circuit (in the motherboard chipset, usually the northbridge) issues a row address strobe (RAS) and a column address strobe (CAS) to specify which memory cell will be accessed, and the time required to read data is expressed in nanoseconds; EDO memory, which preceded SDRAM, worked this way. As system speeds rose, and especially once 66MHz became the bus standard, EDO memory became too slow: the CPU was constantly waiting for data, memory became a serious bottleneck, and performance suffered. Hence the appearance of SDRAM, which is synchronized to the system clock.

Classification of DRAM

FP DRAM: Also called fast page mode memory, it was very popular in the 386 era. Because DRAM needs constant refreshing to retain information, the data is lost once power is cut; its refresh rate can reach hundreds of times per second. FP DRAM uses the same circuitry for addressing and data, so consecutive accesses must be separated by a fixed interval, which makes it slow. In addition, because the DRAM address space is arranged in pages, switching from one page to another costs extra CPU clock cycles. Its interface is mostly the 72-pin SIMM type.

EDO DRAM: Extended Data Output DRAM. EDO RAM eliminates the interval between two storage cycles that FP DRAM requires: it can begin accessing the next location while still sending data to the CPU, so it is 15~30% faster than ordinary DRAM. Its working voltage is generally 5V, and its interface is mostly the 72-pin SIMM type, though 168-pin DIMM versions also exist. EDO DRAM was popular in 486 and early Pentium computers.

SDRAM: The current standard is SDRAM (short for synchronous DRAM), which, as the name implies, runs synchronized with the system clock. SDRAM is accessed in burst mode: it adds synchronous control logic (a state machine) to the existing standard dynamic memory and uses a single system clock to synchronize all address, data, and control signals. Using SDRAM not only improves system performance but also simplifies design and provides high-speed data transfer. Functionally it resembles traditional DRAM and still needs clocked refreshing, so SDRAM can be considered a structurally improved, enhanced DRAM. But how does SDRAM use its synchronous nature to meet the needs of high-speed systems?
All earlier dynamic memory technologies are based on asynchronous control. With asynchronous DRAM, the system must insert wait states, so instruction execution time is often determined by the speed of the memory rather than the fastest speed the system itself could achieve. For example, reading a run of data into the cache from a 60ns fast page memory requires a 40ns page cycle time; when the system runs at 100MHz (a 10ns clock cycle), each data access must wait 4 clock cycles! SDRAM's synchronous nature avoids this waiting. Another major structural feature of SDRAM is that it supports keeping two banks open at the same time: accesses to the two open banks can be interleaved, and precharge or activate operations can generally be hidden behind accesses to the other bank, i.e. one bank can be precharged while the other is being read or written. Accordingly, a seamless 100MHz data rate can be achieved across the whole device. Because SDRAM speed is tied to the system clock, it is rated in MHz or ns, and the SDRAM must be at least as fast as the system clock. An SDRAM access usually occurs as four consecutive burst cycles: the first burst cycle needs four system clock cycles, and the second through fourth need only one each, written as 4-1-1-1. Incidentally, BEDO (burst EDO) memory is similar to SDRAM in principle and performance, but because Intel's chipsets supported SDRAM, and because of Intel's market-leading position, SDRAM became the market standard.
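The burst-timing arithmetic above can be made concrete. Below is a minimal sketch in Python; the function names and the simplified cycle model are illustrative assumptions, not anything from the original article:

```python
# Cycle counts for the article's example: a 100 MHz system clock (10 ns/cycle),
# SDRAM bursting with the 4-1-1-1 pattern vs. asynchronous fast-page DRAM
# that forces ~4 wait cycles per access. Function names are illustrative.

CLOCK_MHZ = 100
CYCLE_NS = 1000 / CLOCK_MHZ  # 10.0 ns per clock at 100 MHz

def sdram_burst_cycles(pattern=(4, 1, 1, 1)):
    """Total clock cycles to read four words in one SDRAM burst."""
    return sum(pattern)

def async_access_cycles(accesses=4, waits_per_access=4):
    """Clock cycles for the same four words on asynchronous fast-page DRAM."""
    return accesses * waits_per_access

print(sdram_burst_cycles(), "cycles =", sdram_burst_cycles() * CYCLE_NS, "ns")   # 7 cycles = 70.0 ns
print(async_access_cycles(), "cycles =", async_access_cycles() * CYCLE_NS, "ns") # 16 cycles = 160.0 ns
```

The comparison shows why synchronous bursting matters: after paying the initial latency once, each additional word costs a single clock instead of a full access.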

DRAM's two interface types: DRAM has two main interface types, the early SIMM and the current standard DIMM. SIMM is the abbreviation of Single In-line Memory Module, a single-sided-contact module that was the common memory interface in 486-era and earlier PCs. Early PCs (before the 486) mostly used 30-pin SIMMs; Pentium machines mostly used 72-pin SIMMs, or mixed them with DIMM slots. DIMM is the abbreviation of Dual In-line Memory Module, a double-sided-contact module: the board carries data contacts on both sides. This interface is widely used in modern computers, usually with 84 pins per side; because it is double-sided, there are 84×2 = 168 contacts in total, so this memory is often called 168-line memory. Plain DRAM modules are usually 72-line, EDO RAM comes in both 72-line and 168-line versions, and SDRAM is usually 168-line.

With the arrival of the new century, new memory standards have brought great changes to computer hardware. Chip manufacturing has advanced to the gigabit scale, raising microprocessor (CPU) clock frequencies, and memory must keep pace with the processor. There are now two new standards, DDR SDRAM and Rambus memory, and the competition between them is at the core of the PC memory market. DDR SDRAM represents a gradual evolution of existing memory, while Rambus represents a major change in computer design. Viewed more broadly, DDR SDRAM is an open standard, whereas Rambus is proprietary. The winner will have a great and far-reaching impact on the computer manufacturing industry.

RDRAM greatly raises the working frequency, but this structural change involves comprehensive modifications to the chipset, DRAM manufacturing, packaging, testing, and even PCBs and modules: a change that affects the whole platform. How will high-speed DRAM architecture develop from here? Can Intel's rework and re-release of the 820 chipset really make RDRAM mainstream, as Intel hopes?

PC133 SDRAM: PC133 SDRAM is basically just an extension of PC100 SDRAM. In DRAM manufacturing, packaging, modules, and connectors it continues the old specifications, and the production equipment is the same, so its production cost is similar to PC100 SDRAM's. Strictly speaking, the only difference is that, under the same process technology, a screening step is added to select dies that can run at 133MHz. Paired with a chipset that supports a 133MHz front-side bus, the CPU's bus frequency rises to 133MHz and DRAM bandwidth increases to above 1 GB/s, improving overall system performance.

DDR SDRAM: DDR SDRAM (Double Data Rate SDRAM), also called SDRAM II. Because DDR transfers data on both the rising and falling edges of the clock, the effective bandwidth is doubled and price/performance improves greatly. In practical comparisons, second-generation PC266 DDR SDRAM (133MHz clock × 2 transfers per clock = 266MHz effective data rate), derived from PC133, averaged 24.4% higher performance than Rambus in a recent research report, and also outperformed other high-bandwidth schemes in Micron's tests.

Direct Rambus DRAM: The design of Rambus DRAM differs from earlier DRAMs, and its memory controller differs from a conventional one, so chipsets must be redesigned to support it. The data-channel interface is also different: Rambus transmits data over two channels, each 8 bits wide (9 bits with ECC). Although this is narrower than SDRAM's 64-bit bus, the clock can run as high as 400MHz and data is transferred on both clock edges, so a peak bandwidth of 1.6 GB/s is reached.

Comprehensive comparison of data bandwidth across DRAM specifications: In terms of data bandwidth, traditional PC100 reaches a peak transfer rate of 800 MB/s at a 100MHz clock. With an advanced 0.25-micron process, most dies can be screened to run at 133MHz as PC133, raising the peak to 1.06 GB/s; as long as the CPU and chipset cooperate, overall system performance improves. DDR goes further: because it transfers data on both the rising and falling clock edges, at the same 133MHz clock its peak transfer rate is doubled to 2.1 GB/s, which even exceeds today's Rambus.
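All the peak figures quoted above follow from one formula: peak rate = clock frequency × bus width in bytes × transfers per clock. A minimal sketch, with the helper name being an illustrative assumption:

```python
# Peak-bandwidth arithmetic for the DRAM variants compared above.

def peak_bandwidth_mb_s(clock_mhz, bus_bits, transfers_per_clock=1):
    """Peak transfer rate in MB/s: clock (MHz) * bus width (bytes) * transfers/clock."""
    return clock_mhz * (bus_bits // 8) * transfers_per_clock

pc100  = peak_bandwidth_mb_s(100, 64)      # 800 MB/s
pc133  = peak_bandwidth_mb_s(133, 64)      # 1064 MB/s, i.e. about 1.06 GB/s
ddr266 = peak_bandwidth_mb_s(133, 64, 2)   # 2128 MB/s, i.e. about 2.1 GB/s
rdram  = peak_bandwidth_mb_s(400, 16, 2)   # 1600 MB/s over a 16-bit Rambus channel
print(pc100, pc133, ddr266, rdram)
```

Note how the two designs reach high bandwidth differently: DDR keeps the wide 64-bit bus and doubles the transfers per clock, while Rambus narrows the bus to 16 bits and pushes the clock to 400MHz.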

Transmission mode: Traditional SDRAM uses parallel data transmission, while Rambus uses a special serial mode. In serial mode the data signals pass through each module in series, which allows the data bus to be narrowed to 16 bits and the working clock to be raised greatly (to 400MHz), but it also constrains module design: if any module in the chain is damaged or a slot is left open, the whole system will not start. Therefore, on a motherboard with Rambus memory, all three memory expansion slots must be filled; where Rambus modules are lacking, a continuity module (Continuity RIMM, C-RIMM), which exists purely to pass the signals along and keep data flowing, must occupy the slot.

Module and PCB design: Because its working frequency is as high as 400MHz, Rambus differs greatly from SDRAM in circuit design, board layout, die packaging, and module design. A memory module built from RDRAM is called a RIMM (Rambus Inline Memory Module). Current designs can use different numbers of RDRAM devices, such as 4, 6, 8, 12, or 16. Although the pin count rises to 184, the overall module length is equivalent to that of the original DIMM.

In addition, each Rambus channel can carry only a limited number of devices (up to 32), which limits the capacity of an RDRAM memory subsystem. That is, if a RIMM with 16 RDRAM devices is already installed, at most one more 16-device RIMM can be added to expand memory. Furthermore, because RDRAM runs at high frequency it generates considerable heat, so a RIMM must be fitted with a heat spreader, which further increases its cost.

Die packaging: DRAM packaging has progressed from the earliest DIP and SOJ to TSOP. Among mainstream SDRAM modules, apart from the TinyBGA technology pioneered by Kingmax and the BLP packaging pioneered by Qiaofeng Technology, most still use TSOP packaging.

With the introduction of DDR and RDRAM, memory frequencies have risen to a higher level, and TSOP packaging can no longer keep up with the requirements of DRAM design. The RDRAM promoted by Intel uses the new-generation μBGA package, and it is expected that other high-speed DRAMs such as DDR will adopt the same or similar BGA packaging in the future.

Although RDRAM achieves a breakthrough in clock frequency and effectively improves whole-system performance, its specification departs sharply from mainstream SDRAM: it is incompatible with existing system chipsets, and the standard is controlled by Intel. The module design requires not only the latest BGA packaging but also strict 8-layer circuit boards, to say nothing of the huge investment in test equipment. Most DRAM and module manufacturers dare not follow rashly.

Moreover, because Rambus is a proprietary standard, manufacturers who want to produce RDRAM must first obtain Rambus certification and pay a high license fee. This not only increases the DRAM makers' cost burden; they also worry about losing their say in setting the next-generation memory standard.

Because a RIMM can carry at most 32 devices, Rambus applications are limited to entry-level servers and high-end PCs. PC133 may not match Rambus in performance, but once DDR technology is added, its data bandwidth reaches 2.1 GB/s, ahead of Rambus's 1.6 GB/s standard, and because DDR is an open standard with far better compatibility than Rambus, it is expected to pose a serious threat. What's more, with the strong backing of Taiwan's DRAM makers allied with VIA and AMD, it is not clear whether Intel can dictate terms as before. Rambus will have only a small market, at least in low-cost PCs and network PCs.

Conclusion: Although Intel has adopted various strategies and countermeasures to restore Rambus's momentum, Rambus, for all its breakthroughs, has many internal problems that are hard to overcome. Perhaps Intel can solve the technical issues by changing the motherboard's RIMM slot design, or by proposing transition schemes (S-RIMM, RIMM Riser) in which SDRAM and RDRAM coexist. But controlling mass-production cost is not something Intel alone can dictate. Moreover, as computing moves onto the network, computers will become ever cheaper, and whether buyers will actually want Rambus remains to be tested. On the supply side, judging from NEC's own VCM SDRAM specification, the conservative attitude of DRAM makers such as Samsung toward Rambus, and the insufficient investment in related packaging and test equipment, Rambus memory chips will likely still lack price competitiveness against PC133 or even DDR before the end of the year.

In the long run, Rambus architecture may become the mainstream, but it should no longer be the absolute mainstream that dominates the market, and SDRAM architecture (PC 133, DDR) should have a very good performance in terms of low cost and wide application fields. It is believed that the future DRAM market will be a situation in which multiple structures coexist.

According to the latest news, Rambus DRAM, expected to become the main force of the next memory generation, is somewhat depressed by the delayed launch of its chipset. To standardize DDR SDRAM, many semiconductor and computer manufacturers around the world have formed the AMII (Advanced Memory International Inc.) camp, which then decided to actively promote the standardization of PC1600 and PC2100 DDR SDRAM (the PC200 and PC266 specifications renamed after their bandwidths of 1.6 GB/s and 2.1 GB/s), bringing the contest between Rambus DRAM and DDR SDRAM into a new phase. AMD, the world's second-largest microprocessor maker, decided that its Athlon processor will adopt PC266 DDR SDRAM and that a supporting chipset will be developed by the middle of this year, which greatly encouraged the DDR SDRAM camp. The global memory industry is likely to shift its future investment focus from Rambus DRAM to DDR SDRAM.

To sum up, DDR SDRAM's momentum this year is stronger than Rambus's. Moreover, the production cost of DDR SDRAM is only about 1.3 times that of SDRAM, giving it a further advantage on cost.