Fourth Generation Computers (1971-Present)


After the integrated circuits, the only place to go was down - in size, that is. Large scale integration (LSI) could fit hundreds of components onto one chip. By the 1980's, very large scale integration (VLSI) squeezed hundreds of thousands of components onto a chip. Ultra-large scale integration (ULSI) increased that number into the millions. The ability to fit so much onto an area about half the size of a U.S. dime helped diminish the size and price of computers. It also increased their power, efficiency and reliability. The Intel 4004 chip, developed in 1971, took the integrated circuit one step further by locating all the components of a computer (central processing unit, memory, and input and output controls) on a minuscule chip. Whereas previously the integrated circuit had had to be manufactured to fit a special purpose, now one microprocessor could be manufactured and then programmed to meet any number of demands. Soon everyday household items such as microwave ovens, television sets and automobiles with electronic fuel injection incorporated microprocessors.

Such condensed power allowed everyday people to harness a computer's power. They were no longer developed exclusively for large business or government contracts. By the mid-1970's, computer manufacturers sought to bring computers to general consumers. These minicomputers came complete with user-friendly software packages that offered even non-technical users an array of applications, most popularly word processing and spreadsheet programs. Pioneers in this field were Commodore, Radio Shack and Apple Computers. In the early 1980's, arcade video games such as Pac Man and home video game systems such as the Atari 2600 ignited consumer interest in more sophisticated, programmable home computers.

In 1981, IBM introduced its personal computer (PC) for use in the home, office and schools. The 1980's saw an expansion in computer use in all three arenas as clones of the IBM PC made the personal computer even more affordable. The number of personal computers in use more than doubled from 2 million in 1981 to 5.5 million in 1982. Ten years later, 65 million PCs were being used. Computers continued their trend toward a smaller size, working their way down from desktop to laptop computers (which could fit inside a briefcase) to palmtop (able to fit inside a breast pocket). In direct competition with IBM's PC was Apple's Macintosh line, introduced in 1984. Notable for its user-friendly design, the Macintosh offered an operating system that allowed users to move screen icons instead of typing instructions. Users controlled the screen cursor using a mouse, a device that mimicked the movement of one's hand on the computer screen.

As computers became more widespread in the workplace, new ways to harness their potential developed. As smaller computers became more powerful, they could be linked together, or networked, to share memory space, software, and information, and to communicate with each other. As opposed to a mainframe computer, which was one powerful computer that shared time with many terminals for many applications, networked computers allowed individual computers to form electronic co-ops. Using either direct wiring, called a Local Area Network (LAN), or telephone lines, these networks could reach enormous proportions. A global web of computer circuitry, the Internet, for example, links computers worldwide into a single network of information. During the 1992 U.S. presidential election, vice-presidential candidate Al Gore promised to make the development of this so-called "information superhighway" an administrative priority. Though the possibilities envisioned by Gore and others for such a large network are often years (if not decades) away from realization, the most popular use today for computer networks such as the Internet is electronic mail, or E-mail, which allows users to type in a computer address and send messages through networked terminals across the office or across the world.

Fifth Generation Computers (Present and Beyond)

Defining the fifth generation of computers is somewhat difficult because the field is in its infancy. The most famous example of a fifth generation computer is the fictional HAL9000 from Arthur C. Clarke's novel, 2001: A Space Odyssey. HAL performed all of the functions currently envisioned for real-life fifth generation computers. With artificial intelligence, HAL could reason well enough to hold conversations with its human operators, use visual input, and learn from its own experiences. (Unfortunately, HAL was a little too human and had a psychotic breakdown, commandeering a spaceship and killing most humans on board.)

Though the wayward HAL9000 may be far from the reach of real-life computer designers, many of its functions are not. Using recent engineering advances, computers may be able to accept spoken word instructions and imitate human reasoning. The ability to translate a foreign language is also a major goal of fifth generation computers. This feat seemed a simple objective at first, but appeared much more difficult when programmers realized that human understanding relies as much on context and meaning as it does on the simple translation of words.

Many advances in the science of computer design and technology are coming together to enable the creation of fifth generation computers. One such engineering advance is parallel processing, which replaces von Neumann's single central processing unit design with a system harnessing the power of many CPUs to work as one. Another is superconductor technology, which allows the flow of electricity with little or no resistance, greatly improving the speed of information flow. Computers today have some attributes of fifth generation computers. For example, expert systems assist doctors in making diagnoses by applying the problem-solving steps a doctor might use in assessing a patient's needs. It will take several more years of development before expert systems are in widespread use.

 

4. The main part of the computer, the processor, is made of silicon.

Read the text given below and answer:

- what is Russian for "silicon"?

- what properties of silicon allow its use as a material for microchip production? (Use your own erudition.)

 

Silicon - a Natural Element

Silicon is a chemical element - one of the basic building blocks of nature. It is, after oxygen, the most abundant element in the earth's crust, of which it comprises about 28 per cent. In the crust it is never found free, but always combined with other elements - such as silica in common sand.

In 1823, the Swedish chemist Jons Jacob Berzelius was the first to isolate and describe silicon. Silicon is dark grey, hard and non-metallic, and it readily forms crystals.

When perfectly pure, silicon is an electrical insulator - it does not conduct electricity. However, experiments during the early part of the 20th century indicated that, when impure, silicon does allow a feeble current to pass through it. It belongs to a class of material called semiconductors.

 

5. Today the word "chip" is known to everybody. We, people who speak Russian, can understand this word without translation. We are accustomed to using "chip" in the meaning of "silicon chip" - a very small piece of silicon containing a set of electronic parts and their connections which is used in computers and other machines. But in English the word has many meanings, the first of them being a small piece broken off something or, especially in American English, a long thin piece of potato cooked in fat.

Now read the text about silicon chips and answer the questions:

1. Why is the invention of a chip ("microchip") compared with the second Industrial Revolution?

2. For what purposes are chips supposed to be used?

3. What kind of changes in employment will automation inevitably cause?

 

Silicon Chip

Tiny crystal wafers - small enough to go through the eye of a needle - will probably transform society out of all recognition by the turn of the century. They are the miracle products of the fast-growing science of microelectronics, and are commonly called silicon chips.

The most versatile form of chip is the microprocessor. This is termed the 'computer on a chip' because its electronic circuits perform the main functions of a computer - arithmetic, information storage, and so on. Such microprocessors are the 'brains' inside the familiar digital watch and the pocket calculator.

But their usefulness extends beyond computing. Like computers, they can be designed to exercise control. And because of their minuscule size, they can be fitted to virtually anything that needs to be controlled - central-heating systems, cookers, cameras, car engines, combine-harvesters and steel-rolling mills.

Second Industrial Revolution

The 'mighty micro' marks a stage in man's technological evolution as fundamentally important as the manufacture of hand tools in the Old Stone Age, the invention of the wheel about 3500 BC, and the development of the steam-engine from AD 1700. The micro is poised to do for the human mind what the steam-engine did for human muscle. Because it is so small and can be fitted to almost any machine, it will be the linchpin of the second Industrial Revolution. Machinery will be operated by computer rather than human beings, and automation will dominate industrial production.

The microchip is made up of many electric circuits, through which tiny electric currents flow. (An electric current is a flow of electrons.) In an old radio set, gas-filled valves detected and amplified (or strengthened) radio signals, and converted them into sound signals for the loudspeakers. In these valves electrons were manipulated in various ways, and the science of electron-manipulation became known as electronics.

Using similar valve circuits, the first electronic computer, ENIAC, was completed in the United States in 1945. This led by the mid-1950s to the commercial computers, one of the first of which was the Ferranti Mark I Star, which was built in Britain. This was room-sized, filling two bays 16 ft long, 8 ft high, and 4 ft deep. It contained 4,000 valves, 6 miles of wire and 100,000 soldered connections. It needed nearly 30 kilowatts of power to operate. Because valves give out heat, Star needed to be thoroughly air-conditioned.

But the writing was already on the wall for such computers, because in 1948 the transistor had been invented. The transistor manipulated electrons in just the same way as a valve, but was many times smaller, needed less power and was cheaper and more reliable. The microprocessor uses transistors in its circuits, but they are shrunk to microscopic size, though still performing in exactly the same way.

The number of components that can be squeezed on to the chip has virtually doubled each year since 1960. In 1981 one of the most advanced chips, only about 1/4 in. square, contained 450,000 electronic components - a density of over 7 million components per square inch. It had a computing power thousands of times greater than Star and required only a few thousandths of a watt to operate. Pocket calculators incorporating a silicon chip had the computing power of many room-sized computers of only a decade before.

Brain-Power Computer

The microchip has brought us closer to the day when we can truly call the computer an 'electronic brain'. In the early 1950s it was estimated that to match the capacity of the human brain a computer the size of central London would be required. By 1980 the size of a brain-equivalent computer had shrunk to about the size of the human brain. And it was still shrinking.

It might be expected that something with such incredible computing power as the microprocessor would be expensive. But it is not. By 1980 the unit cost of an advanced microchip was less than £1.50. The reason is that although designing a chip is expensive, and the research and development costs of the chip-maker are enormous, once it has been designed it can be mass produced in vast quantities. The final product is extremely cheap, and getting cheaper all the time.

There is scarcely a field of human activity that will escape the influence of the microchip. Microchips will not only abound in industry and business as computers do now; they will also advance into the office, the home, transport, medicine and even be used on the farm. Virtually every machine, for whatever purpose, will be fitted with a microchip to control it or make it work more efficiently.

One of the earliest applications in the home was in electronic sewing machines introduced by Singer as early as 1975. They were simpler in construction than their electromechanical predecessors.

Micro-controlled cookers are also on the market; their hotplates and ovens can be pre-programmed for a variety of cooking tasks. Within a short space of time other domestic appliances - toasters, washing machines, refrigerators - and central-heating systems will be made more energy efficient by the introduction of the chip.

In the future, it is possible that house-cleaning robots, responding to the housewife's voice, will store information for an entire week's work.

A Home Control System

The day of the home computer is already here. In 1980 the British company Sinclair Electronics was marketing one for under £100, which could be operated when connected to an ordinary television set. Soon the same money will probably buy a much more powerful system that will form the heart of a home communications and control system. It will perform the duties of a housekeeper, supervising the running of the home - controlling heating, cooking, and ordering supplies. It will also act as secretary and accountant, keeping financial records, filing information, and triggering reminders of tasks to be done, such as paying bills.

Manufacturing industry will become increasingly automated as more chip-controlled machinery is installed. Such machinery is ideally suited to do the boring, repetitive tasks called for on an assembly line, for example. And it can do them with greater speed and with greater precision than the human workers it will replace. It can work endlessly, round the clock if necessary, without tiring. The result will be a more reliable product and greater productivity. For the human worker this promises freedom from industrial drudgery and increased leisure time.

The micro-controlled robot assembly worker has been slow in emerging, however. By 1980 the robot machine had been introduced, mainly in Japanese and American car assembly plants, for welding and paint spraying. But the imminent introduction of 'seeing' robots - those that can recognise the shape of objects - will accelerate the process.

Changes in Employment

Just as the first Industrial Revolution in the 18th and 19th centuries brought about a change in the pattern of employment, so will the second. In the first, the introduction of machines led to the fear of unemployment among the people whose jobs were being mechanised. And to a certain extent their fears were justified. Individually some suffered a severe loss of income when they had to take work in the factories. But the use of machines increased productivity and stimulated a much greater demand for goods, and as a result employment overall increased rather than decreased.

The automation brought about by the microchip will inevitably put the low-skilled assembly line workers out of work. But it will generate a demand for people with other skills - for example, computer systems designers and skilled electronics engineers. The shorter working hours that the micro will bring about will stimulate the growth of leisure industries such as sport and travel. They, too, will require new people with a variety of different skills. Instead of there being widespread unemployment, it is more likely in the long term that there will be a redeployment of labour.

A survey of industries in the Greater Manchester area was made in 1980 to gauge the effect the introduction of the microchip would have. It concluded that only about 2 per cent of the workforce would be displaced. But a greater percentage would be at risk if microchip technology were not introduced, for the products produced would not be price-competitive with those of firms using the chip.

 

6. Computers save a lot of our time processing information for us. It is very important to understand the system that makes the dialogue with a computer possible.

Read the text "Data Representation: "On/Off' and point out most essential notions which are introduced in it and give definitions to these notions.

 

Data Representation: "On/Off"

People are accustomed to thinking of computers as complex mechanisms, but the fact is that these machines basically know only two things: "on" and "off". This "on/off", "yes/no", two-state system is called the binary system. Using the two states - which can be represented by electricity turned on or off, or a magnetic field that is positive or negative - sophisticated ways of representing data are constructed.

Let us look at one way the two states can be used to represent data (see the Table below). Whereas the decimal number system has a base of 10 (0, 1, 2, 3, and so on, to 9), the binary system has a base of 2. This means it has only two digits, 0 and 1, which can correspond to the two states "off" and "on". Combinations of these zeros and ones can then be used to represent larger numbers.

Each 0 or 1 in the binary system is called a bit (for binary digit). The bit is the basic unit for storing data in primary storage - 0 for "off", 1 for "on" (see Figure 1).

Since single bits by themselves cannot store all the numbers, letters, and special characters (such as "$" and "?") that must be processed by a computer, the bits are put together in groups, usually of six or eight, and called bytes (see Figure 2). Each byte (pronounced "bite") usually represents one character of data - a letter, digit, or special character.

Computer manufacturers express the capacity of primary storage in terms of the letter K. The letter was originally intended to represent the number 2 to the tenth power, which is 1024, but the meaning has evolved so that now K stands either for 1024 or is rounded off to 1000. A kilobyte is K bytes - that is, 1024 bytes. Kilobyte is abbreviated KB. Thus, 2KB means 2K bytes. Main storage may also be expressed in megabytes, or millions of bytes ("mega" means million). Recently some large computers have expressed main storage in terms of gigabytes - that is, billions of bytes.
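To make these units concrete, here is a short Python sketch (added to this reading as an illustration, not part of the original text) showing how a character is stored as one byte of binary digits and how the K convention scales up to kilobytes, megabytes and gigabytes.

# Sketch: binary digits, bytes, and the "K" convention.
char = "A"
code = ord(char)                # the character's numeric code: 65
bits = format(code, "08b")      # the same value written as one 8-bit byte: '01000001'
print(char, code, bits)

K = 2 ** 10                     # K = 1024
kilobyte = K                    # 1 KB = 1,024 bytes
megabyte = K * K                # 1 MB = 1,048,576 bytes
gigabyte = K ** 3               # 1 GB = 1,073,741,824 bytes
print(kilobyte, megabyte, gigabyte)

print(4 * K)                    # a "4K memory" holds 4 * 1024 = 4096 bytes (or words)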

A small, pocket computer may have less than 2K bytes of primary storage, but most microcomputers have between 4K and 64K bytes. Minicomputers usually range from 64K bytes to 1 megabyte. Mainframes range from 512K bytes (more than half a megabyte) up to 16 megabytes and more.

In advertising computers, the use of the letter K can be confusing. A microcomputer billed as having a "4K memory" may have 4096 bytes or 4096 words. A computer word is defined as the number of bits that constitute a common unit of information, as defined by the computer system. The length of a word varies by computer. Common word lengths in bits are 8 (microcomputers), 16 (traditional minicomputers and some microcomputers), 32 (full-size mainframe computers and some minicomputers), and 64 (supercomputers).

A computer's word size is very important. In general, the larger the word size, the more powerful the computer. A larger word size means:

• The computer can transfer more information at a time, making the computer faster.

• The computer word has room to reference larger addresses, thus allowing more main memory.

• The computer can support a greater number and variety of instructions.

The internal circuitry of a computer must reflect its word size. Usually, a bus line has the same number of data paths as bits in its word size. Thus, a 16-bit processor will have a 16-bit bus, meaning that data can be sent over the bus lines a word at a time.
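The link between word size, bus width and addressable memory can be sketched in a few lines of Python (an added illustration; the figures follow from powers of two rather than from the text itself).

# Sketch: how many distinct addresses an n-bit word can reference.
for word_size in (8, 16, 32, 64):
    addresses = 2 ** word_size                  # each extra bit doubles the address space
    print(f"{word_size}-bit word: {addresses:,} addressable locations")

bus_width_bits = 16                             # a 16-bit processor with a 16-bit bus...
print("bytes per bus transfer:", bus_width_bits // 8)   # ...moves 2 bytes at a time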

COMPUTER WORD SIZES
Bits per Word   Representative Computers
8               Atari, PET
16              IBM PC, HP-3000, Lisa
18              PDP-15
24              Harris H100
32              IBM 370, VAX 11/780
36              Honeywell DPS8, Univac 1100/80
48              Burroughs 6700
60              CDC 7600
64              Cyber 205, Cray-1

 

Some computers are designed to be character-oriented; that is, data is addressed as a series of single characters. When an instruction calls for data to be accessed from a memory location, the data is moved a character at a time, until the required number of characters has been read. In other words, the length of the data being processed may vary. Computers that process data this way are said to use variable-length words.

Other computers move data a word at a time. These computers are said to use fixed-length words. As we mentioned, the size of the word depends on the computer.

7. If we are users of a personal computer, our interaction is possible through commands, which we often choose by clicking on this or that icon on the display. But there is a group of specialists who prepare programs for computers, and they have to know special languages to communicate with these machines.

 

Read the following text, enumerate the main reasons for the creation of a programming language and answer the questions:

1. How many programming languages are there at present?

2. Name all the programming languages you know.

3. Is there any difference between a human language and a programming language? What is it?

4. How many levels of language are mentioned by the author? Characterize each of them.

5. What kind of changes in the communication between a man and a computer have happened recently?

6. What are the main approaches to choosing a programming language? (Use your own experience.)

 

Programming Language: A Tower of Babel?

You ask your friend why he is looking so excited. "I'm mad about my flat," he says.

What does that mean? In the United States it probably means he is angry about the bad tire on his car. In England, however, it could mean he is enthusiastic about his apartment.

That is the trouble with the English language, or with any human language, for that matter. A natural language is loosely configured, ambiguous, full of colloquialisms, slang, variations, and complexities. And, of course, it is constantly changing. On the other hand, a programming language - a set of rules that provides a way of instructing the computer what operations to perform - is anything but loose and ambiguous: its vocabulary has specific meaning. It is not mad about its flat.

A programming language, the key to communicating with the computer, has certain definite characteristics. It has a limited vocabulary. Each word in it has precise meaning. Even though a programming language is limited in words, it can still be used, in step-by-step fashion, to solve complex problems. There is not, however, just one programming language; there are many.

At present, there are over 150 programming languages - and these are the ones that are still being used. We are not counting the hundreds of languages that for one reason or another have fallen by the wayside over the years. Some of the languages have rather colorful names: SNOBOL, STUDENT, HEARSAY, DOCTOR, ACTORS, JOVIAL. Where did all these languages come from? Do we really need to complicate the world further by adding programming languages to the Tower of Babel of human languages?

Programming languages initially were created by people in universities or in the government and were devised for special functions. Some languages have endured because they serve special purposes in science, engineering, and the like. However, it soon became clear that some standardization was needed. It made sense for those working on similar tasks to use the same language.

 

Levels of Language

Programming languages are said to be "lower" or "higher", depending on whether they are closer to the language the computer itself uses (0s and 1s - low) or to the language people use (more English-like - high). We shall consider these levels of language:

• Machine language

• Assembly language

• High-level language

• Nonprocedural language

Let us look at each of these categories.

 

Machine Language

Humans do not like to deal in numbers alone (they prefer letters and words). But, strictly speaking, numbers are what machine language is. This lowest level of language, machine language, represents information as 1s and 0s - binary digits corresponding to the "on" and "off" electrical states in the computer.

In the early days of computing, each computer had its own machine language, and programmers had rudimentary systems for combining numbers to represent instructions such as "add" and "compare". Primitive by today's standards, the programs were not at all convenient for people to read and use. As a result, the computer industry moved to develop assembly languages.

Assembly Languages

Today assembly languages are considered fairly low-level - that is, they are not as convenient for people to use as more recent languages. At the time they were developed, however, they were considered a great leap forward. Rather than using simply 1s and 0s, assembly language uses abbreviations or mnemonic codes to replace the numbers: A for "Add", C for "Compare", MP for "Multiply", and so on. Although these codes were not English words, they were still - from the standpoint of human convenience - preferable to numbers alone.

The programmer who uses an assembly language requires a translator to convert his or her assembly-language program into machine language. A translator is needed because machine language is the only language the computer can actually execute. The translator is an assembler program, also referred to as an assembler. It takes the programs written in assembly language and turns them into machine language. A programmer need not worry about the translating aspect; he or she need only write programs in assembly language. The translation is taken care of by the computer system.

Although assembly languages represent a step forward, they still have many disadvantages. One is that the assembly language varies according to the type of computer.

Another disadvantage of assembly language is the one-to-one relationship between the assembly language and machine language - that is, for every command in assembly language there is one command in machine language.
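A toy assembler written in Python can illustrate this one-to-one translation. The mnemonics follow the A/C/MP examples above, but the opcode values and the instruction format are invented purely for illustration and do not correspond to any real machine.

# Toy assembler: each mnemonic is translated into exactly one (invented) machine instruction.
OPCODES = {"A": 0x01, "C": 0x02, "MP": 0x03}     # Add, Compare, Multiply (hypothetical codes)

def assemble(program):
    """Turn (mnemonic, operand) pairs into numeric 'machine' words, one for one."""
    machine_code = []
    for mnemonic, operand in program:
        opcode = OPCODES[mnemonic]               # one assembly command -> one machine opcode
        machine_code.append((opcode << 8) | operand)
    return machine_code

source = [("A", 5), ("MP", 3), ("C", 15)]        # add 5, multiply by 3, compare with 15
print([hex(word) for word in assemble(source)])  # ['0x105', '0x303', '0x20f']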

High-Level Languages

The invention of high-level languages in the mid-1950s transformed programming into something quite different from what it had been. The harried programmer working on the details of coding and machines became a programmer who could pay more attention to solving the client's problems. The programs could solve much more complex problems. At the same time, they were written in an English-like manner, thus making them more convenient to use. As a result of these changes, the programmer could accomplish more with less effort.

Of course, a translator was needed to translate the symbolic statements of a high-level language into computer-executable machine language; this translator is called a compiler. There are many compilers for each language, one for each type of computer.

 

Nonprocedural Languages: Computers for Everybody

I WANT A REPORT FOR ALL SALES PEOPLE IN REGION 20 IN ORDER BY EMPLOYEE NUMBER WITHIN DEPARTMENT, SHOWING JUNE SALES AND YEAR-TO-DATE SALES.

You cannot get much closer to English than this. This is an example of a nonprocedural language, a language intended primarily to help non-programmers access a computer's data base. For example, a customer service representative of a car-rental agency might need to know which cars are out and when they are due back.

Examples of nonprocedural languages are FOCUS, RAMIS, NOMAD, INQUIRE, and MARK IV. With nonprocedural languages, it is clear that access to computer information for the masses is at hand.

 

8. When you think about the development of computers you may have various associations, and Silicon Valley might be one of them.

Read the text given below and answer the questions:

1. What is "Silicon Valley"?

2. When and where did it appear?

3. What was the reason for its origin?

4. What famous companies were mentioned in the text?

5. Why did the author consider that "Silicon Valley" was a way of life? Do you agree with him? Put forward your arguments.

Point out several stages of development of Silicon Valley. Explain your decision.

 

Silicon Valley

It was not called "Silicon Valley" when I was growing up there in the 1940s and '50s. It was simply the Santa Clara Valley, a previously agricultural area of apricot and cherry orchards rapidly filling with suburban housing. Industrial "parks" also appeared as the postwar boom in electronics took hold in California. Blessed with a temperate climate, the valley stretches beside San Francisco Bay from the college town of Palo Alto to what was once the sleepy city of San Jose.

Today this is the nation's ninth largest manufacturing center, with the fastest-growing and wealthiest economy in the United States. In the last 10 years, San Jose has grown by over a third, jumping from 29th to 17th largest city in the United States. In the same period, the median family income in the valley went from $18,000 a year to an estimated $27,000. There are 6000 Ph.D.s living here - one of every six doctorates in California - and they are a hard-working lot. Many engineers put in 15-hour days and seven-day work weeks, and talk about achieving success in 10 years. The rewards they seek are apt to be the more material badges of success, such as cars and real estate. Porsches and Mercedes abound, and one local Ferrari dealership is second in size only to the one in Beverly Hills. A Monopoly-like board game developed locally is almost a satire on this success/failure frenzy. Called "Silicon Valley: In the Chips", it has very little to do with silicon chips and computers. Rather, the object of the game is "to negotiate your way through the valley and make your wealth through proper management of your income in home purchases and business investments".

As one who has watched not a few cow pastures become parking lots, I regard all this change with a great deal of ambivalence. How did such a concentration of high-tech industry come about?

One name often mentioned as being pivotal is Frederick Terman. In the 1940s, Stanford University, located near Palo Alto, was a respectable regional university, but not yet the world-class institution of higher learning it is today. For development on its scientific and technical side, a great deal of credit must go to Terman, who in 1946 became dean of the School of Engineering. On one hand, Terman urged former students with last names like Hewlett and Packard and Varian to establish their electronics businesses locally. On the other hand, he wholeheartedly encouraged Stanford to join the effort of establishing the region as a center of advanced technology. He encouraged engineering faculty to go out and consult. He offered training to industry engineers. He sat on the boards of small businesses. He helped persuade the university administration and trustees to lease Stanford land to local electronics companies, thus beginning the Stanford Industrial Park, the nucleus of commercial high technology in the region.

Today the 660-acre park has some 70 advanced-technology businesses located there. Hewlett-Packard Co., Varian Associates, and other early tenants were followed into the valley in the 1950s by such large firms as Lockheed, General Electric, Ford, and GTE. The U.S. government established research facilities at Moffett Naval Air Station and nearby Berkeley and Livermore.

It has been said there would be no "Silicon Valley", however, if William Shockley's mother had not lived in Palo Alto. One of the inventors of the transistor (for which he won a Nobel Prize) while he was at Bell Laboratories in New Jersey, Shockley returned to the town where he was raised and in 1956 set up Shockley Transistor Co. Two years later, several of his associates left and set up Fairchild Semiconductors Co., which many observers believe represents the true beginning of the semiconductor industry. The 1960s became a turbulent time as many others left Fairchild to start companies with now well-known names such as National Semiconductor, Intel, and Advanced Micro Devices. Among computer manufacturers, IBM was the first to arrive in the valley, but one of its executives, Gene Amdahl, resigned in 1970 and started his own company. Tandem Computers, Inc., was founded in 1974 by several former Hewlett-Packard employees. Peripheral equipment manufacturers - makers of storage devices and media and related equipment - also sprang up. Ampex, started in 1944 and a pioneer in magnetic recording systems, was followed by companies such as Memorex, started in 1961. Electronic games began when Atari, Inc., created "Pong" in 1972. The company now makes personal computers, but it was not prepared to enter into that market when one of its employees, a young college dropout named Steve Jobs, first urged it to do so. Jobs joined forces with Steve Wozniak of Hewlett-Packard and founded Apple Computer, one of the valley's huge success stories.

Today, the Santa Clara Valley seems to an old-time resident to be strangling on its own success. Housing is among the most expensive in the country: former $25,000 homes sell for $300,000 and up. The pace and intensity of work leads to job burnout, and the divorce rate is higher than the rate for the state as a whole. Traffic chokes the eight-lane freeways. Local zoning boards and city councils are resisting further growth.

However, Silicon Valley is no longer a single region. It is a way of life. "Silicon Valley" has moved beyond Santa Clara County to the so-called 128 Belt of Boston; to the "Sci/Com" area along Route 270 outside of Washington, D.C.; to Colorado; to Oregon - and to many places overseas.

 

9. In many ways telecommuting is an extension of the characteristics found in personal computing. There is also increased freedom and control - in this case, over time and space. You can now work at your own pace in the environment of your choice...

Read the text "The Evolution of Data Communications" and

- enumerate all its stages;

- explain the necessity of appearance of Local Area Networks.


The Evolution of Data Communications

Mail, telephone, TV and radio, books, and periodicals - these are the principal ways we send and receive information. They have not changed appreciably in a generation. However, data communications systems - computer systems that transmit data over communication lines such as telephone lines or coaxial cables - have been gradually evolving through the past two decades. Let us take a look at how they came about.

In the early days, computers were often found in several departments of large companies. Any department within an organization that could justify the expenditure acquired its own computer. There could be, for example, different computers to support engineering, accounting, and manufacturing. However, because department managers generally did not know enough about computers to use them efficiently, these expenditures were often wasteful. The response to this problem was to centralize computer operations.

Centralization produced better control, and the consolidation of equipment led to economies of scale; that is, hardware and supplies could be purchased in bulk at cheaper cost. Centralized data processing placed everything in one central company location: all processing, hardware, software, and storage. Computer manufacturers responded to this trend by building large, general-purpose computers so that all departments within an organization could be serviced efficiently. IBM's contribution was the IBM/360 computer, so called because it provided the full spectrum - "360 degrees" - of services.

Eventually, however, total centralization proved inconvenient. All input data had to be physically transported to the computer and all processed material picked up and delivered to the users. Insisting on centralized data processing was like insisting that all conversations between people be face to face. Clearly, the next logical step was to connect users via telephone lines and terminals to the central computer. Thus, in the 1960s, the centralized system was made more flexible by the introduction of time-sharing through teleprocessing systems - terminals connected to the central computer via communication lines. This permitted users to have remote access to the central computer from other buildings and even other cities.

 

10. Now we cannot imagine our life without the possibility of sending e-mail messages and communicating with other people by means of the Internet. But when and how did it become possible?

Read the text about the first e-mail message and about the discoveries which led to the possibility of sending it, and after that explain why e-mail became so popular all over the world.

 

E-mail and How It Became Possible

Three, maybe four times in recent history, a new technology has been introduced that has fundamentally transformed human society by changing the way people communicate with each other. For the most part, the moments in which these new technologies came into being are preserved with a kind of clarity and drama that is both thrilling and unforgettable.

There is Samuel B. Morse and the first telegram. Delivered on May 24, 1844, the message read "What hath God wrought!" Morse knew that he was making history.

And there was the dawn of the telephone era, heralded by Alexander Graham Bell's less grand, though still legendary, summons to his assistant on March 10, 1876: "Mr. Watson, come here; I want you."

While the exact wording of Guglielmo Marconi's first wireless transmission in 1895 is not the stuff of legend, it didn't take long for Marconi to be heaped with honors and awards, topped off by a Nobel Prize for physics in 1909. And even 30 years later the inauguration of wireless service between England and South Africa felt like an historic event to the participants. "We speak across time and space. May the new power promote peace between all nations," read the Marconigram sent from Sir Edgar Walton, high commissioner of South Africa, to General J. B. M. Hertzog, South Africa's prime minister, in 1924.

Sometime in late 1971, a computer engineer named Ray Tomlinson sent the first e-mail message. "I sent a number of test messages to myself from one machine to the other," he recalls now. "The test messages were entirely forgettable... Most likely the first message was QWERTYIOP or something similar."

Tomlinson's name hardly lives in the public mind. When he is remembered at all, it is as the man who picked @ as the locator symbol in electronic addresses. In truth though, he is the inventor of e-mail, the application that launched the digital information revolution.

Tomlinson worked for Bolt Beranek and Newman (BBN), the company hired by the United States Defense Department in 1968 to build ARPANET, the precursor to the Internet. In 1971 he was tinkering around with an electronic message program called SNDMSG, which he had written to allow programmers and researchers to leave messages for each other.

But this was not e-mail, exactly. Like a number of then existing electronic message programs, the oldest dating from the early 1960s, SNDMSG only worked locally; it was designed to allow the exchange of messages between users who shared the same machine. Such users could create a text file and deliver it to a designated "mail box".

When Tomlinson sat down to play around with SNDMSG, he had been working on an experimental file transfer protocol called CYPNET, for transferring files among linked computers at remote sites within ARPANET.

The way CYPNET was originally written, it sent and received files, but had no provision for appending to a file. So he set out to adapt CYPNET to use SNDMSG to deliver messages to mailboxes on remote machines, through the ARPANET.

What Tomlinson did next, if he had fully grasped its significance, might have earned him a place alongside the giants of communication history.

First, he chose the @ symbol to distinguish between messages addressed to mailboxes in the local machine and messages that were headed out onto the network.

Then he sent himself an e-mail message. BBN had two PDP-10 computers wired together through the ARPANET. "The first message was sent between two machines that were literally side-by-side. The only physical connection they had, however, was through the ARPANET," according to Tomlinson.

The message flew out via the network between two machines in the same room in Cambridge; and the message was QWERTYIOP. Or something like that.

Once Tomlinson was satisfied that SNDMSG worked on the network, he sent a message to colleagues letting them know about the new feature, with instructions for placing an @ in between the user's login name and the name of his host computer. "The first use of network mail," says Tomlinson, "announced its own existence."
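A minimal sketch of the convention Tomlinson introduced: splitting an address at the @ into a login name and a host name. The address below is made up purely for illustration.

# Split an e-mail address into login name and host, following the user@host convention.
address = "ray@example-host"                # hypothetical address, for illustration only
user, _, host = address.partition("@")      # the text before @ is the user, after it the host
print("user:", user)
print("host:", host)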

The engineers and scientists quickly adopted it as the preferred mode of day-to-day communications.

In a paper published in 1978 by the Institute of Electrical and Electronic Engineers, two of the important figures in the creation of the ARPANET, J.C.R. Licklider and Albert Vezza, explained the popularity of e-mail. "One of the advantages of the message systems over letter mail was that, in an ARPANET message, one could write tersely and type imperfectly, even to an older person in a superior position and even to a person one did not know very well, and the recipient took no offense.... Among the advantages of the network message services over the telephone were the fact that one could proceed immediately to the point without having to engage in small talk first, that the message services produced a preservable record, and that the sender and receiver did not have to be available at the same time."

Tomlinson still works at BBN, which was acquired last year by GTE. A principal engineer for BBN, he is currently working on a project that involves "developing an architecture for quickly building distributed information integration and visualization tools". In the nearly three decades that have elapsed since he invented e-mail, he has worked on everything from network protocols to supercomputer design.

Recently, Tomlinson answered a few questions about his role as the inventor of e-mail. The interviews were conducted, appropriately enough, through e-mail.

Asked what inspired his invention, his response comes back as undramatic as the event itself: "Mostly because it seemed like a neat idea," he writes. "There was no directive to 'go forth and invent e-mail'."

Like many of the men involved in the creation of ARPANET, he looks back on the late 1960s and early 1970s as a golden age in both computer research and their own careers.

Finally, what of his place in history? Morse, Bell, Marconi. And Tomlinson?

"The pace [of progress] has accelerated tremendously since the [names] you mention. This means that any single development is stepping on the heels of the previous one and is so closely followed by the next that most advances are obscured," writes the inventor of e-mail, modestly. "I think that few individuals will be remembered."

But he hasn't quite given up hope. "I am curious to find out if I am wrong," he adds.

 

11. Using the Internet we buy food, book hotel rooms and tickets, find husbands and wives, do research. But maybe not all of us know how and when it was created.

Read the text given below and express your personal attitude towards the problem of "Global brain". Is it desirable to build it? Put forward your arguments.

 

Telecommunications: Global Brain?

The immense economic power of the telecommunications industry constitutes the "base camp" from which computer power will assault the old structures. One example: in the single area of "computer conferencing" (the use of computers to link people together), some scientists have already envisioned the rapid obsolescence of many education techniques, the electronic replacement of 80 percent of business mail, and a significant alteration of transportation and settlement patterns. When these effects were first suggested in a Futurist article in 1974, there were only about a hundred persons in the world engaging in such "computer conferencing". By the end of the decade they numbered in the thousands, and Dr. Michael Arbib suggested that the building of a "Global Brain for Mankind" was an urgent necessity. Can we build such a brain? Is it desirable to build it?

 

12. Read the text "The History of Internet" and prepare an oral report on (lie topic "The role of Internet in my life and in the life of society" trying to reflect positive and negative sides of using Internet so widely.

 

The History of Internet

The USSR launches Sputnik, the first artificial earth satellite. In response, the United States forms the Advanced Research Projects Agency (ARPA) within the Department of Defense (DoD) to establish US lead in science and technology applicable to the military.

Paul Baran, of the RAND Corporation (a government agency), was commissioned by the U.S. Air Force to do a study on how it could maintain its command and control over its missiles and bombers after a nuclear attack. This was to be a military research network that could survive a nuclear strike, decentralized so that if any locations (cities) in the U.S. were attacked, the military could still have control of nuclear arms for a counter-attack.

Baran's finished document described several ways to accomplish this. His final proposal was a packet switched network.

"Packet switching is the breaking down ofdata into datagrams or packets that are labeled to indicate the origin and the destination of the information and the forwarding of these packets from one computer to another computer until the information arrives at its final destination computer. This was crucial to the realization of a computer network. If packets are lost at any given point, the message can be resent by the originator. "

ARPA awarded the ARPANET contract to BBN. BBN had selected a Honeywell minicomputer as the base on which they would build the switch. The physical network was constructed in 1969, linking four nodes: University of California at Los Angeles, SRI (in Stanford), University of California at Santa Barbara, and University of Utah. The network was wired together via 50 Kbps circuits.

The first e-mail program was created by Ray Tomlinson of BBN.

The Advanced Research Projects Agency (ARPA) was renamed the Defense Advanced Research Projects Agency (or DARPA).

ARPANET was currently using the Network Control Protocol or NCP to transfer data. This allowed communications between hosts running on the same network.

Development began on the protocol later to be called TCP/IP. It was developed by a group headed by Vinton Cerf from Stanford and Bob Kahn from DARPA. This new protocol was to allow diverse computer networks to interconnect and communicate with each other.

First use of term "Internet" by Vint Cerf and Bob Kahn in paper on Transmission Control Protocol.

Dr. Robert M. Metcalfe develops Ethernet, which allowed coaxial cable to move data extremely fast. This was a crucial component to the development of LANs.

The packet satellite project went into practical use. SATNET, the Atlantic Packet Satellite Network, was born. This network linked the United States with Europe. Surprisingly, it used commercial Intelsat satellites that were owned by the International Telecommunications Satellite Organization, rather than government satellites.

UUCP (Unix-to-Unix CoPy) developed at AT&T Bell Labs and distributed with UNIX one year later.

The Department of Defense began to experiment with the TCP/IP protocol and soon decided to require it for use on ARPANET.

USENET (the decentralized news group network) was created by Steve Bellovin, a graduate student at University of North Carolina, and programmers Tom Truscott and Jim Ellis. It was based on UUCP.

The creation of BITNET by IBM ("Because It's Time Network") introduced the "store and forward" network. It was used for e-mail and listservs.

The National Science Foundation created a backbone called CSNET, a 56 Kbps network for institutions without access to ARPANET. Vinton Cerf proposed a plan for an inter-network connection between CSNET and the ARPANET.

Internet Activities Board (IAB) was created in 1983.

On January 1st, every machine connected to ARPANET had to use TCP/IP. TCP/IP became the core Internet protocol and replaced NCP entirely.

The University of Wisconsin created the Domain Name System (DNS). This allowed packets to be directed to a domain name, which would be translated by the server database into the corresponding IP number. This made it much easier for people to access other servers, because they no longer had to remember numbers.
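In modern terms, the lookup described here is a single library call. The short Python sketch below (an added illustration that needs a working network connection; the domain is just an example) asks the system's resolver to translate a name into its IP number.

# Translate a domain name into an IP address via the DNS resolver.
import socket

domain = "example.com"                      # example name; any reachable domain will do
ip_address = socket.gethostbyname(domain)   # the DNS database maps the name to a number
print(f"{domain} -> {ip_address}")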

The ARPANET was divided into two networks: MILNET and ARPANET. MILNET was to serve the needs of the military and ARPANET to support the advanced research component; the Department of Defense continued to support both networks.

The upgrade to CSNET was contracted to MCI. The new circuits would be T1 lines, 1.5 Mbps, which is twenty-five times faster than the old 56 Kbps lines. IBM would provide advanced routers and Merit would manage the network. The new network was to be called NSFNET (National Science Foundation Network), and the old lines were to keep the name CSNET.

The National Science Foundation began deploying its new T1 lines, which would be finished by 1988.

The Internet Engineering Task Force or IETF was created to serve as a forum for technical coordination by contractors for DARPA working on ARPANET, US Defense Data Network (DDN), and the Internet core gateway system.

BITNET and CSNET merged to form the Corporation for Research and Educational Networking (CREN), another work of the National Science Foundation.

Soon after the completion of the T1 NSFNET backbone, traffic increased so quickly that plans immediately began on upgrading the network again.

Merit and its partners formed a not for profit corporation called ANS, Advanced Network Systems, which was to conduct research into high speed networking. It soon came up with the concept of the T3, a 45 Mbps line. NSF quickly adopted the new network and by the end of 1991 all of its sites were connected by this new backbone.

While the T3 lines were being constructed, the Department of Defense disbanded the ARPANET and it was replaced by the NSFNET backbone. The original 50 Kbps lines of ARPANET were taken out of service.

Tim Berners-Lee at CERN in Geneva implements a hypertext system to provide efficient information access to the members of the international high-energy physics community.

CSNET (which consisted of 56 Kbps lines) was discontinued, having fulfilled its important early role in the provision of academic networking service. A key feature of CREN is that its operational costs are fully met through dues paid by its member organizations.

The NSF established a new network, named NREN, the National Research and Education Network. The purpose of this network is to conduct high speed networking research. It was not to be used as a com­mercial network, nor was it to be used to send a lot of the data that the Internet now transfers.

Internet Society is chartered.

World-Wide Web released by CERN.

NSFNET backbone upgraded to T3 (44.736 Mbps).

InterNIC created by NSF to provide specific Internet services: directory and database services (by AT&T), registration services (by Network Solutions Inc.), and information services (by General Atomics/CERFnet).

Marc Andreessen and NCSA at the University of Illinois develop a graphical user interface to the WWW, called "Mosaic for X".

No major changes were made to the physical network. The most significant thing that happened was the growth. Many new networks were added to the NSF backbone. Hundreds of thousands of new hosts were added to the INTERNET during this time period.

Pizza Hut offers pizza ordering on its Web page.

First Virtual, the first cyberbank, opens.

ATM (Asynchronous Transfer Mode, 145 Mbps) backbone is installed on NSFNET.

The National Science Foundation announced that as of April 30, 1995 it would no longer allow direct access to the NSF backbone. The National Science Foundation contracted with four companies that would be providers of access to the NSF backbone (Merit). These companies would then sell connections to groups, organizations, and companies.

$50 annual fee is imposed on domains, excluding .edu and .gov domains, which are still funded by the National Science Foundation.

1996-Date

Most Internet traffic is carried by backbones of independent ISPs, including MCI, AT&T, Sprint, UUnet, BBN planet, ANS, and more.

Currently the Internet Society, the group that controls the INTERNET, is trying to work out a new version of TCP/IP that will offer billions of addresses, rather than the limited system of today. The problem that has arisen is that it is not known how both the old and the new addressing systems will be able to work at the same time during a transition period.

 

13. It is exciting to know something about our future. That's why fortune tellers will never lose their jobs, though not everything they say comes true.

Now you will see a text which contains many predictions. It was written in the 1970s. Compare the predictions with the real level of development of computers and decide which statements came true and which have not come true yet.

Predictions

Computerized conferencing will be a prominent form of communications in most organizations by the mid-1980s. By the mid-1990s, it will be as widely used in society as the telephone today.

• It will offer a home recreational use that will make significant inroads into TV viewing patterns.

• It will have dramatic psychological and sociological impacts on various group communication objectives and processes.

• It will be cheaper than mail or long distance telephone voice communications.

• It will offer major opportunities to disadvantaged groups in the society to acquire the skills and social ties they need to become full members of the society.

• It will have dramatic impacts on the degree of centralization or decentralization possible in organizations.

• It will provide a fundamental mechanism for individuals to form groups having common concerns, interests or purposes.

• It will facilitate working at home for a large percentage of the work force during at least half of their normal work week.

• It will have a dramatic impact upon the formation of political and special interest groups.

• It will open the doors to new and unique types of services.

• It will indirectly allow for sizable amounts of energy conservation through substitution of communication for travel.

• It will dramatically alter the nature of social science research concerned with the study of human systems and human communications processes.

• It will facilitate a richness and variability of human groupings and relationships almost impossible to comprehend.

14. Remember what you know about computer crimes in our country. Is it profitable for computer criminals to rob banks?

Read the text and say how many types of computer crimes the author mentions. Is it difficult to discover a computer crime and the causes behind it?

Computer Crime

Stories about computer crime have a lot of appeal. Like good detective stories, they often embody cleverness. They are "clean" white-collar crimes; no one gets physically hurt. They feature people beating the system, making the Big Score against an anonymous, faceless, presumably wealthy organization.

But computer crime is serious business and deserves to be taken seriously by everyone. After all, if computer criminals can steal money from major banks, can they not steal from you? If unauthorized persons can get at your money, can they not also get at your medical records or private family history? It is not a long step between a thief violating your bank account and an unseen "investigator" spying on your private life.

The FBI reports that whereas the average bank robbery involves $3200 and the average bank fraud $23,500, the average computer crime involves half a million dollars. No one knows the dollar figure of unreported computer crime, but some estimates put it as high as $3 billion a year. Whatever it is, some judge it to be 20 times the annual take of a decade ago.

Needless to say, the problems of computer crime will only be aggravated by increased access. More employees will have access to computers on their jobs. A great many more people will be using personal computers. And more students will be taking computer training. Computer crime basically falls into three categories:

• Theft of computer time for development of software for personal use or with the intention of selling it. It is difficult to prove programs were stolen when copies are made because the originals are still in the hands of the original owners.

• Theft, destruction, or manipulation of programs or data. Such acts may be committed by disgruntled employees or by persons wishing to use another's property for their own benefit.

• Altering data stored in a computer file.

While it is not our purpose to be a how-to book on computer crime, we will mention a few criminal methods as examples. The Trojan Horse is the name given to the crime in which a computer criminal is able to place instructions in someone else's program that allow the program to function normally but to perform additional, illegitimate functions as well. The salami method describes an embezzlement technique that gets its name from taking a "slice" at a time. The salami technique came into its own with computers - the taking of a little bit at a time, such as a few cents from many bank accounts. Obviously, this activity was not worth the effort in precomputer days. Data diddling is a technique whereby data is modified before it goes into the computer file. Once in the file, it is not as visible.

Discovery and Prosecution

Prosecuting the computer criminal is difficult because discovery is often difficult. The nature of the crime is such that it is hard to detect, and thus many times it simply goes undetected. In addition, crimes that are detected are - an estimated 85 percent of the time - never reported to the authorities. By law, banks have to make a report when their computer systems have been compromised, but other businesses do not. Often they choose not to report because they are worried about their reputations and credibility in the community.

Most computer crimes, unfortunately, are discovered by accident.

For example, a programmer changed a program to add ten cents to every customer service charge under $10 and one dollar to every charge over $10. He then placed this overage into the last account, a bank account he opened himself in the name of Zzwicke. The system worked fairly well, generating several hundred dollars each month, until the bank initiated a new marketing campaign in which they singled out for special honors the very first depositor - and the very last.

Even if a computer crime is detected, prosecution is by no means assured. There are a number of reasons for this. First, law enforcement agencies do not fully understand the complexities of computer-related fraud. Second, few attorneys are qualified to handle computer crime cases. Third, judges and juries are not educated in the ways of computers and may not consider data valuable. They may see the computer as the villain and the computer criminal as the hero. (Computer criminals often tend to be much like those trying to prosecute them - the same age, educational background, and social status - and may even be high up in the company and friends of top management.) Finally, the laws in many states do not adequately define computer crime.

This last point is illustrated by a famous case in which two programmers stole $244,000 worth of their employer's computer time in order to rescore music for a private business they were operating on the side. The law under which they were convicted? Mail fraud. It was the best criminal law the prosecutor could find that would apply. Although the common notion of larceny is taking goods belonging to another with the intention of depriving the owner of them permanently, this definition is not well suited to the theft of computer time. Time is not a tangible object, and it is difficult to determine its market value.

In short, the chances of committing computer crimes and having them go undetected are, unfortunately, good. And the chances that, if detected, there will be no ramifications are also good: a computer criminal may not go to jail, may not be found guilty if prosecuted, and may not even be prosecuted. You can be sure, however, that this will not be tolerated for long. One example of the countermeasures being taken is the four-week training school offered by the FBI in computer crime; the hundreds of agents that have passed through the school are now in FBI offices throughout the country.

 

