Text Translation of Computer English (3rd Edition)

Computer English (3rd Edition), edited by Liu Yi and Wang Chunsheng, is a computer English textbook for the 21st century. It covers basic computer knowledge, system architecture, software engineering, application development, network communication, e-commerce, and other information technologies that have a profound impact on our lives. The book draws on recent English articles and classic original textbooks in the computer and IT field, with accompanying notes and exercises, so that readers can quickly master the general characteristics of computer English and a large body of professional vocabulary, and improve their ability to read and search original computer literature.

Unit 1: Computer and Computer Science
Text A: Computer Overview

1. A computer is an electronic device that can receive a set of instructions, or program, and then carry out the program by manipulating numerical data or processing other forms of information. The modern high-tech world would be impossible without the development of the computer. Society uses computers of different types and sizes to store and process all kinds of data, from confidential government files and bank transactions to private household accounts. Through automation, computers have opened a new era in manufacturing, and they have also enhanced the performance of modern communication systems. Computers are essential tools in almost every field of research and applied technology, from constructing models of the universe to producing tomorrow's weather forecast, and their use has itself opened up new areas of inquiry. Database services and computer networks make a wide variety of information sources available. The same advanced techniques, however, also make it possible to invade personal privacy and business secrets. Computer crime has become one of the many risks that are part of the price of modern technology.

2. The first adding machine in history, a precursor of the digital computer, was designed in 1642 by Blaise Pascal, a French scientist, mathematician, and philosopher. This device used a series of wheels with ten teeth each, every tooth representing a digit from 0 to 9. The wheels were interconnected so that numbers could be added to each other by advancing the wheels forward by the correct number of teeth. In the 1670s the German philosopher and mathematician Gottfried Wilhelm Leibniz improved on this machine and devised one that could also multiply. The French inventor Joseph-Marie Jacquard, in designing an automatic loom, used punched boards to control the weaving of complicated patterns.
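The carrying mechanism of Pascal's ten-toothed wheels can be illustrated with a short program. This is a hypothetical sketch for illustration only (the class and method names are invented, not from the source): each decimal wheel holds one digit, and turning a wheel forward past 9 carries into the next wheel.

```python
# Hypothetical simulation of Pascal's adder: a row of ten-toothed decimal
# wheels. Advancing a wheel past 9 propagates a carry into the next wheel.

class PascalineWheels:
    def __init__(self, num_wheels=6):
        self.digits = [0] * num_wheels  # digits[0] is the ones wheel

    def turn(self, wheel, teeth):
        """Advance one wheel forward by `teeth` positions, carrying as needed."""
        while wheel < len(self.digits):
            total = self.digits[wheel] + teeth
            self.digits[wheel] = total % 10
            teeth = total // 10          # carry passed to the next wheel
            if teeth == 0:
                break
            wheel += 1

    def add(self, number):
        """Add a whole number by turning each decimal wheel in turn."""
        for wheel, ch in enumerate(reversed(str(number))):
            self.turn(wheel, int(ch))

    def value(self):
        return sum(d * 10 ** i for i, d in enumerate(self.digits))

machine = PascalineWheels()
machine.add(642)
machine.add(379)
print(machine.value())  # 1021
```

The carry loop plays the role of the mechanical linkage between adjacent wheels.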
During the 1880s the American statistician Herman Hollerith conceived the idea of using punched cards, similar to Jacquard's boards, to process data. By employing a system that passed punched cards over electrical contacts, he was able to compile statistical information for the 1890 United States Census.

1. The Analytical Engine. Also in the 19th century, the British mathematician and inventor Charles Babbage worked out the principles of the modern digital computer. He conceived a number of machines, such as the Difference Engine, designed to handle complicated mathematical problems. Many historians consider Babbage and his associate, the mathematician Augusta Ada Byron, the true pioneers of the modern digital computer. One of Babbage's designs, the Analytical Engine, had many features of a modern computer. It had an input stream in the form of punched cards, a "store" for saving data, a "mill" for arithmetic operations, and a printer that made a permanent record. Babbage failed to put this idea into practice, though it may well have been technically feasible at the time.

2. Early Analog Computers. Early analog computers were built at the end of the 19th century. The early models calculated by means of rotating shafts and gears. Numerical approximations of equations too difficult to solve by any other means could be evaluated with such machines. Lord Kelvin built a mechanical tide predictor that was in fact a specialized analog computer. During World War I and World War II, mechanical analog computing systems, and later electronic analog systems, were used as torpedo course predictors in submarines and as bombsight controllers in aircraft. Another system was designed to predict spring floods in the Mississippi River basin.

3. Electronic Computers. During World War II, a team of scientists and mathematicians working at Bletchley Park, north of London, built one of the earliest all-electronic digital computers: Colossus. By December 1943, Colossus, which contained 1,500 vacuum tubes, was operational.
It was used by the team headed by Alan Turing in the largely successful effort to decipher German radio messages enciphered with the Enigma code. In addition, in the United States, John Atanasoff and Clifford Berry had already built a prototype electronic machine at Iowa State College as early as 1939. This prototype and subsequent research were carried out quietly, and they were later overshadowed by the development of the Electronic Numerical Integrator And Computer (ENIAC) in 1945. ENIAC was granted a patent; decades later, however, in 1973, the patent was voided when it was revealed that the machine had incorporated principles first used in the Atanasoff-Berry Computer.

Figure 1A-1: ENIAC, one of the earliest all-electronic digital computers.

ENIAC (see Figure 1A-1) contained 18,000 vacuum tubes and had a speed of several hundred multiplications per minute, but originally its program was wired into the processor and had to be altered manually. Later machines were built with program storage, based on the ideas of the Hungarian-American mathematician John von Neumann. The instructions, like the data, were stored in a "memory," freeing the computer from the speed limitations of the paper-tape reader during execution and permitting problems to be solved without rewiring the computer. The use of transistors in computers in the late 1950s marked the advent of logical elements that were smaller, faster, and more versatile than those of vacuum-tube machines. Because transistors consume much less power and have a much longer life, this development alone was responsible for the improved machines called second-generation computers. Components became smaller, as did the spacing between components, and the cost of manufacturing systems fell.

4.
Integrated Circuits. In the late 1960s integrated circuits (see Figure 1A-2) came into use, making it possible to fabricate many transistors on a silicon substrate, with interconnecting wires plated in place. Integrated circuits led to further reductions in price, size, and failure rates. By the mid-1970s, with the introduction of large-scale integrated circuits and later very-large-scale integrated circuits (microchips), many thousands of interconnected transistors could be etched onto a single silicon substrate, and the microprocessor became a reality.

Figure 1A-2: An integrated circuit.

Returning to the switching capability of modern computers: computers of the 1970s were generally able to handle eight switches at a time. That is, they could process eight binary digits, or bits, of data in each cycle. A group of eight bits is called a byte, and each byte can hold 256 possible patterns of ons and offs (or 1s and 0s). Each pattern is the equivalent of an instruction, part of an instruction, or a particular type of datum, such as a number, a character, or a graphics symbol. For example, the pattern 11010010 might be binary data, in this case representing the decimal number 210, or it might be an instruction telling the computer to move data stored in its switches to a memory chip somewhere. The development of processors that can handle 16, 32, and 64 bits of data at a time has increased the speed of computers. The complete collection of recognizable patterns that a computer can handle, the total list of its operations, is called its instruction set. As modern digital computers continue to develop, both factors, the number of bits that can be handled at one time and the size of the instruction set, continue to increase.

3. Regardless of the size of their hardware, modern digital computers are similar in concept.
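The byte arithmetic described above can be checked directly. This is a minimal illustration (not from the source text): one byte of 8 bits admits 2^8 = 256 distinct on/off patterns, and the pattern 11010010, read as an unsigned binary number, is the decimal value 210.

```python
# One byte = 8 bits, so a byte can hold 2**8 = 256 distinct patterns.
BITS_PER_BYTE = 8
patterns_per_byte = 2 ** BITS_PER_BYTE
print(patterns_per_byte)            # 256

# The example pattern from the text, interpreted as an unsigned binary number.
pattern = "11010010"
value = int(pattern, 2)             # parse the string in base 2
print(value)                        # 210

# The same value written back out as an 8-bit pattern.
print(format(value, "08b"))         # 11010010
```

The round trip between the bit pattern and its decimal value shows why a single byte can stand for a number, a character code, or part of an instruction: the hardware only sees the 256 patterns, and the interpretation is supplied by context.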
They can be categorized by cost and performance, however: the personal computer or microcomputer, a relatively low-cost machine usually of desktop size (though "laptops" are small enough to fit in a briefcase and "palmtops" can fit into a pocket); the workstation, a microcomputer with enhanced graphics and communication capabilities that make it especially useful for office work; the minicomputer, generally too expensive for personal use, with capabilities suited to a business, a school, or a laboratory; and the mainframe, a large, expensive machine with the capability of serving the needs of major business enterprises, government departments, and scientific research establishments (the largest and fastest of these are called supercomputers).

A digital computer is not a single machine; rather, it is a system composed of five distinct elements: (1) a central processing unit (CPU); (2) input devices; (3) storage devices; (4) output devices; and (5) a communication network, called a bus, that links all the elements of the system and connects the system to the external world.

4. Programming. A program is a sequence of instructions that tells the hardware of a computer how to process data. Programs can be built into the hardware itself, or they may exist independently in a form known as software. In some specialized, or "dedicated," computers the operating instructions are embedded in their circuitry; common examples are the microcomputers found in calculators, wristwatches, automobile engines, and microwave ovens. A general-purpose computer, on the other hand, although it contains some built-in programs (in read-only memory) or instructions (in the processor chip), depends on external programs to perform useful tasks. Once a computer has been programmed, it can do only as much or as little as the software controlling it at any given moment allows. Software in widespread use includes a wide range of applications, that is, instructions to the computer on how to perform various tasks.

5.
One continuing trend in computer development is miniaturization, the effort to compress more and more circuit elements into smaller and smaller chip space. Researchers are also trying to speed up circuitry by using superconductivity, the phenomenon whereby electrical resistance disappears in some materials at very low temperatures. Another trend is the development of "fifth-generation" computers, that is, computers that can solve complex problems in ways that might eventually merit the description "creative," with true artificial intelligence as the ideal goal. One avenue being actively explored is parallel processing, which uses many chips to perform several different tasks at the same time. One important approach to parallel processing is the neural network, which mimics the architecture of the nervous system. Another ongoing trend is the growth of computer networks, which now employ a worldwide data communication system of satellite and cable links to connect computers around the globe. In addition, much research is devoted to the possibility of "optical" computers, hardware that would process pulses of light, which the text describes as much faster than electrical pulses.

Unit 2: Computer Architecture
Text A: Computer Hardware

1. Computer hardware is the equipment involved in the operation of a computer; it consists of the components that can be physically handled. The functions of these components are typically divided into three main categories: input, output, and storage. Components in these categories connect to microprocessors, specifically the computer's central processing unit (CPU), the electronic circuitry that provides the computational ability and controls the computer, via wires or circuitry called a bus. Software, on the other hand, is the set of instructions a computer uses to manipulate data, such as a word-processing program or a video game.
These programs are usually stored by the computer hardware and transferred to and from the CPU as needed. Software also governs how the hardware is used, for example, how information is retrieved from a storage device. The interaction between input and output hardware is controlled by software called the basic input/output system (BIOS). Although microprocessors are still technically considered hardware, portions of their function are also associated with computer software. Because they have characteristics of both hardware and software, microprocessors are often referred to as firmware.

Second, Input Hardware. Input hardware consists of external devices, that is, components outside the computer's CPU, that provide information and instructions to the computer. A light pen is a stylus with a light-sensitive tip that is used to draw directly on a computer's display screen or to select information on the screen by pressing a clip in the light pen or by pressing the light pen against the surface of the screen. The pen contains light sensors that identify which portion of the screen it is passed over. A mouse is a pointing device designed to be gripped by one hand. It has a detection device (usually a ball) on the bottom that enables the user to control the motion of an on-screen pointer, or cursor, by moving the mouse on a flat surface. As the device slides across the surface, the cursor moves across the screen. To select items or choose commands on the screen, the user presses a button on the mouse. A joystick is a pointing device composed of a lever that can move in multiple directions to manipulate a cursor or other graphical object on the computer screen. A keyboard is a typewriter-like device that allows the user to type text and commands into the computer. Some keyboards have special function keys or integrated pointing devices, such as a trackball or touch-sensitive regions, that let the user move the on-screen cursor with finger movements.
An optical scanner employs light-sensing equipment to convert images such as pictures or text into electronic signals that can be processed by a computer. A photograph, for example, can be scanned into a computer and then included in a text document created on that computer. The two most common types are the flatbed scanner, which is similar to an office photocopier, and the handheld scanner, which is passed manually across the image to be processed. A microphone is a device for converting sound into signals that can be stored, processed, and played back by the computer. A speech recognition module is a device that converts spoken words into information that the computer can recognize and process. A modem, short for modulator-demodulator, is a device that connects a computer to a telephone line and allows information to be transmitted to or received from another computer. Each computer that sends or receives information must be connected to a modem. The information sent by one computer is converted by its modem into audio signals, which are transmitted over telephone lines to the receiving modem, which converts the signals back into information the receiving computer can understand.

Third, Output Hardware. Output hardware consists of external devices that transfer information from the computer's CPU to the computer user. A video display, or screen, converts information generated by the computer into visual information. Displays commonly take one of two forms: the cathode-ray tube (CRT) display and the liquid crystal display (LCD). A CRT-based screen, or monitor, looks similar to a television set. Information from the CPU is displayed using a beam of electrons that scans a phosphorescent surface, which emits light and creates the image. An LCD-based screen displays visual information on a flatter and smaller screen than a CRT-based video monitor. LCDs are frequently used in laptop computers.
Printers take text and image output from a computer and print it on paper. A dot-matrix printer uses tiny wires to strike an inked ribbon and so form characters. A laser printer employs a beam of light to draw images on a drum, which then attracts fine black particles called toner; the toner is fused onto the paper to form the image. An inkjet printer sprays tiny droplets of ink onto the paper to form characters and images.

4. Storage Hardware. Storage hardware provides permanent storage of information and programs for retrieval by the computer. The two main types of storage devices are disk drives and memory. There are several types of disk drive: hard, floppy, magneto-optical, and optical. A hard disk drive stores information in magnetic particles embedded in a disk. Usually a permanent part of the computer, a hard disk can store large amounts of information and retrieve that information very quickly. A floppy disk drive also stores information in magnetic particles, but these are embedded in removable disks that may be floppy or rigid; floppy disks store less information than a hard disk and retrieve it much more slowly. A magneto-optical disk drive stores information on removable disks that are sensitive to both laser light and magnetic fields. It can typically store as much information as a hard disk, but retrieves it somewhat more slowly. A compact disc drive (CD-ROM) stores information in pits burned into the surface of a disc made of reflective material. Information stored on a CD-ROM cannot be erased or replaced with new information. CD-ROMs can store about as much information as a hard drive, but retrieve it more slowly. Memory refers to the computer chips that store information for quick retrieval by the CPU. Random access memory (RAM) is used to store the information and instructions that operate the computer's programs.
Typically, programs are transferred from storage on a disk drive to RAM. RAM is also known as volatile memory because the information in its chips is lost when power to the computer is turned off. Read-only memory (ROM) contains critical information and software that must be permanently available for computer operation, such as the operating system that directs the computer's actions from start-up to shut-down. ROM is called nonvolatile memory because its chips do not lose their information when the computer is powered off. Some devices serve more than one purpose. A floppy disk, for example, can also be used as an input device if it contains information to be used and processed by the computer user; it can be used as an output device as well, if the user wants to store the results of computations on it.

5. Hardware Connections. To function, hardware requires physical connections that allow components to communicate and interact. A bus provides a common interconnection system composed of a group of wires or circuitry that coordinates and moves information between the internal parts of a computer. A computer bus consists of two channels: one that the CPU uses to locate data, called the address bus, and another used to send the data to that address, called the data bus. A bus can be characterized by two features: how much information it can carry at one time, called the bus width, and how quickly it can transfer these data. A serial connection is a wire or set of wires used to transfer information from the CPU to an external device such as a mouse, keyboard, modem, scanner, or some types of printer. This type of connection transfers only one piece of data at a time and is therefore slow. The advantage of a serial connection is that it provides effective connections over long distances.
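The serial scheme just described can be sketched in a few lines. This is a hypothetical illustration (the function names are invented, not from the source): a serial link moves one bit per clock tick, so an 8-bit byte takes eight ticks, whereas a parallel link with eight wires could deliver the same byte in a single tick.

```python
# Hypothetical model of a serial link: a byte travels one bit at a time.

def serial_send(byte_value):
    """Yield the 8 bits of a byte one at a time, least significant bit first."""
    for i in range(8):
        yield (byte_value >> i) & 1

def serial_receive(bits):
    """Reassemble a byte from bits received least significant bit first."""
    value = 0
    for i, bit in enumerate(bits):
        value |= bit << i
    return value

sent = 0xA5
received = serial_receive(serial_send(sent))
print(received == sent)   # True: the byte arrived intact after 8 one-bit ticks
```

Widening the link to eight wires, one per bit, is exactly what the parallel connection described next does, trading extra wiring for an eightfold gain in transfer rate.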
A parallel connection uses multiple sets of wires to transfer blocks of information simultaneously. Most scanners and printers use this type of connection. A parallel connection is much faster than a serial connection, but it is limited to distances of less than 3 meters (10 feet) between the CPU and the external device.

Unit 3: Computer Language and Programming
Text A: Programming Language

1. Introduction. In computer science, a programming language is an artificial language used to write a sequence of instructions (a computer program) that can be run by a computer. Similar to natural languages such as English, programming languages have a vocabulary, grammar, and syntax. However, natural languages are not suited for programming computers because they are ambiguous; that is, their vocabulary and grammatical structure may be interpreted in multiple ways. The languages used to program computers must have simple logical structures, and the rules for their grammar, spelling, and punctuation must be precise. Programming languages vary greatly in their sophistication and in their degree of versatility. Some programming languages are written to address particular kinds of computing problems or for use on particular types of computer systems. For instance, programming languages such as FORTRAN and COBOL were written to solve certain general types of programming problems: FORTRAN for scientific applications and COBOL for business applications. Although these languages were designed to address specific categories of computer problems, they are highly portable; that is, they may be used to program many types of computers. Other languages, such as machine languages, are written for one specific type of computer system, or even one specific computer, and are used in certain research applications. The most commonly used programming languages are highly portable and can be used to solve diverse types of computing problems effectively. Languages like C, PASCAL, and BASIC fall into this category.
Second, Language Types. Programming languages can be classified as either low-level languages or high-level languages. A low-level programming language, or machine language, is the most basic type of programming language and can be understood directly by a computer. Machine languages differ depending on the manufacturer and model of the computer. A high-level language is a programming language that must be translated into a machine language before the computer can understand and process it. C, C++, PASCAL, and FORTRAN are all examples of high-level languages. Assembly language is an intermediate language that is very close to machine language; it does not have the sophistication of other high-level languages, but it must still be translated into machine language.

1. Machine Language. In machine language, instructions are written as sequences of 1s and 0s, called bits, that a computer can understand directly. An instruction in machine language generally tells the computer four things: (1) where to find one or two numbers or simple pieces of data in the computer's main memory (RAM); (2) a simple operation to perform, such as adding the two numbers together; (3) where in main memory to put the result of this simple operation; and (4) where to find the next instruction to perform. While all executable programs are eventually read by the computer in machine language, they are not all written in machine language. It is extremely difficult to program directly in machine language because the instructions are sequences of 1s and 0s. A typical machine language instruction might be written as 10010101011, meaning add the contents of storage register A to the contents of storage register B.

2. High-Level Language. High-level languages are relatively sophisticated sets of statements that use words and syntax from human language. They are more similar to normal human language than assembly or machine languages, so it is easier to use them to write complicated programs.
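The four pieces of information that a machine-language instruction encodes, as listed in the Machine Language subsection above, can be made concrete with a toy decoder. This is a hypothetical instruction format invented for illustration (it is not any real machine's encoding): a 16-bit word holds a 4-bit operation code followed by three 4-bit memory addresses.

```python
# Hypothetical 16-bit toy instruction format (not a real machine):
#   4-bit opcode | 4-bit operand address A | 4-bit operand address B | 4-bit result address
OPCODES = {0b0001: "ADD"}

def decode(word):
    """Split a 16-bit instruction word into its four fields."""
    opcode = (word >> 12) & 0xF
    addr_a = (word >> 8) & 0xF
    addr_b = (word >> 4) & 0xF
    dest   = word & 0xF
    return OPCODES[opcode], addr_a, addr_b, dest

def execute(word, memory):
    """Carry out one decoded instruction against a tiny 16-cell memory."""
    op, a, b, dest = decode(word)
    if op == "ADD":                      # add the contents of two memory cells
        memory[dest] = memory[a] + memory[b]

memory = [0] * 16
memory[2], memory[3] = 40, 2
execute(0b0001_0010_0011_0100, memory)   # ADD mem[2], mem[3] -> mem[4]
print(memory[4])   # 42
```

In an assembly language, the same instruction would be written with a mnemonic and symbolic names rather than raw bits, which is exactly the convenience the next subsection describes.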
These programming languages allow larger and more complicated programs to be developed faster. However, high-level languages must be translated into machine language by another program called a compiler before a computer can understand them. For this reason, programs written in a high-level language may take longer to execute and use up more memory than programs written in assembly language.

3. Assembly Language. Computer programmers use assembly languages to make machine-language programs easier to write. In an assembly language, each statement corresponds roughly to one machine-language instruction. An assembly-language statement is composed of easy-to-remember commands. In a typical assembly language, the command to add the contents of storage register A to the contents of storage register B might be written Add B,A. Assembly languages share certain features with machine languages. For instance, it is possible to manipulate specific bits in both assembly and machine languages. Programmers use assembly languages when it is important to minimize the time a program takes to run, because the translation from assembly language to machine language is relatively simple. Assembly languages are also used when some part of the computer has to be controlled directly, such as individual dots on a monitor or the flow of individual characters to a printer.

Third, Classification of High-Level Languages