Maxwell, Hertz, and German radio-wave history

Although early reports about electricity and magnetism date back to before Christ, it took another 2000 years until, in the eighteenth century, men like B. Franklin, A. Volta, C. Coulomb, L. Galvani, and many others studied electrostatic and magnetostatic effects more intensely. In contrast to mechanics, hydrodynamics, and astronomy, which belonged to the mathematics discipline, electricity and magnetism were usually investigated by physicians, pharmacists, priests, philosophers, chemists, and fascinated amateurs. However, at the end of the eighteenth century and the beginning of the nineteenth century, researchers with a mathematical background took over in France and later in Great Britain and Germany. Because of the many schools of thought and parallel developments in the nineteenth century, it is appropriate to first mention the many evolutionary achievements made outside Germany and then consider the German contributions.

 

ACHIEVEMENTS OUTSIDE GERMANY

 

The basic ingredients of radio waves, electric and magnetic fields, have their roots in observations made by Oersted, Ampere, Faraday, and Maxwell in the decades 1820-1870. Oersted (Denmark) discovered in 1820 that flowing electricity, nowadays designated as current, exerted forces on magnets. This was a major milestone because previous experiments had dealt only with the electrostatics of charged bodies or voltages from electrochemical batteries. Moreover, Oersted stressed the importance of a closed circuit in order that electricity could flow, and he discovered the circular nature of the magnetism caused by flowing electricity. With his discoveries, research on electricity and magnetism attained a new quality. Previously, electricity and magnetism had been considered isolated phenomena; now, attention was paid to their interrelationship. Ampere followed up Oersted's findings in the very same year, discovered mutual forces between current-carrying conductors, delivered an equation permitting the calculation of the forces between the conductors, and postulated circular currents as the unique sources of all manifestations of magnetism.

With their experiments, Oersted and Ampere had shown that electricity could be transformed into magnetism. In 1831, Faraday demonstrated the reverse phenomenon and showed that magnetism could likewise be transformed into electricity. He had, in other words, discovered the phenomenon of induction. Moreover, he coined the concepts of lines of force, dielectrics, and, eventually, fields (1845). He even suggested an interpretation of light as undulations of his lines of force.

For most of Faraday's contemporaries, nature consisted of discrete particles that, in analogy to Newton's gravitational law, exerted instantaneous attracting or repelling forces upon each other over large distances, the so-called theory of action at a distance. Its foremost protagonists were Ampere, Neumann, and W. Weber. In contrast, Faraday interpreted nature to be macroscopically continuous, and forces on a body brought into the environment of conductors or poles would be related to local quantities at that body's location (theory of local action). He claimed that his lines of force represented continuous fields and, in addition, could be curved, which strongly opposed the widely accepted Newtonian-mechanics-based approach with its straight lines of force.

One of the few contemporaries who supported Faraday's field concept was Maxwell, who in 1862 published an article "On Physical Lines of Force." His subsequent thoughts were published in 1865 in his article "A Dynamical Theory of the Electromagnetic Field." Maxwell brought existing equations and his own genuine concepts together and composed a consistent set of equations. Hertz, who felt great admiration for Maxwell, stated: "The theory of Maxwell is best defined as the System of Maxwell's Equations."

The most outstanding innovative feature of Maxwell's mathematical models of electric and magnetic fields was the concept of displacement current Jd = ∂D/∂t, which allowed for the continuity of current flow also in open circuits, in other words, in nonconducting media. As a consequence of that, his displacement current allowed the derivation of wave equations for electric and magnetic fields, formally identical with wave equations known from continuum mechanics.
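As a worked illustration (not part of the original text), the standard source-free vacuum derivation shows how the displacement current closes the loop between the two curl equations and yields a wave equation:

```latex
% Source-free vacuum: \mathbf{J} = 0,\ \mathbf{D} = \varepsilon_0 \mathbf{E},\ \mathbf{B} = \mu_0 \mathbf{H}
% Faraday's law and the Ampere-Maxwell law (with the displacement current):
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{H} = \frac{\partial \mathbf{D}}{\partial t}
% Taking the curl of Faraday's law and substituting the second equation:
\nabla \times (\nabla \times \mathbf{E})
  = -\mu_0 \frac{\partial}{\partial t}\,(\nabla \times \mathbf{H})
  = -\mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2}
% With \nabla \cdot \mathbf{E} = 0, the identity
% \nabla \times (\nabla \times \mathbf{E}) = \nabla(\nabla \cdot \mathbf{E}) - \nabla^2 \mathbf{E}
% gives the wave equation, with propagation speed c:
\nabla^2 \mathbf{E} = \mu_0 \varepsilon_0 \frac{\partial^2 \mathbf{E}}{\partial t^2},
\qquad c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}}
```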

 

PROGRAMMABLE CONTROLLERS

 

In the 1960s, computers were considered by many in industry as the ultimate way of increasing efficiency, reliability, productivity, and automation of industrial processes. Computers possessed the ability to acquire and analyze data at extremely high speeds, make decisions, and then disseminate information back to the control process. However, there were disadvantages associated with computer control, such as high cost, complexity of programs, hesitancy on the part of industry personnel to rely on a machine, and lack of personnel trained in computer technology. Thus, computer applications through the 1960s were mostly in the area of data collection, on-line monitoring, and open-loop advisory control.

During the mid 1960s, though, a new concept of electronic controllers evolved, the programmable controllers (PCs). This concept developed from a mix of solid-state computer technology and traditional sequential controllers, such as the stepping drum (a mechanical rotating switching device) and the solid-state programmer with plug-in modules. This new device first came about as a result of problems faced by the auto industry, which had to scrap costly assembly line controls each time a new model went into production. The first PCs were installed in 1969 as electronic replacements of electromechanical relay controls. The PC presented the best compromise of existing relay ladder schematic techniques (a topic that will be discussed later in this chapter) and expanding solid-state technology. It increased the efficiency of the auto industry's system by eliminating the costly job of rewiring relay controls used in the assembly line process. The PC reduced the changeover downtime, increased flexibility, and considerably reduced the space requirements formerly used by the relay controls.

In this chapter, we will first discuss the basic concept of sequential control. Next, we will describe the programmable controller. We end the chapter with a description of a specific PC, the Allen-Bradley Bulletin 1772. The PC has gained wide acceptance in industry because of its ability to reconfigure processes economically and to be programmed easily. However, the 1772 is not the only PC on the market. Currently, there are about thirty-eight companies in the United States that manufacture PCs.

 

Sequential Control

 

Traditionally, industrial processes and control systems used relays, timers, and counters. These devices constitute a class of control systems used to control processes known as sequential control processes. A sequential process is a process in which one event follows another until the job is completed.

In this section, we first present an example of a sequential process. This process could be controlled either by the traditional electromechanical devices or by a PC.

 

Automatic Mixer

 

The tank in the diagram is filled with a fluid, agitated for a length of time, and then emptied. A state description is similar to a flowchart in computer programming. This sequential process is the kind of process that can easily be handled by a programmable controller.

A ladder diagram is a diagram with a vertical line (the power line) on each side. All the components are placed between these two lines, connecting the two power lines with what look like rungs of a ladder - thus the name, ladder diagram. The letter symbols in the diagram are defined in succeeding paragraphs.

Relay ladder diagrams are universally understood in industry, whether in the process industry, in manufacturing, on the assembly line, or inside electric appliances and products. Any new product increases its chances of success if it capitalizes on widely held concepts. Thus, the PC's ladder diagram language was a logical choice.

Some mention should be made at this point about electrical and electronics symbol designations. In general, there is a difference between electrical and electronics symbols and symbol designations. These two industries grew up somewhat independently of each other, and therefore differences exist. For example, the electronics symbol for a resistor is a zigzag line with a symbol designation of R1. The same symbol in the electrical or industrial world is a rectangle with lines out the ends and with a symbol designation of 1R. These differences can sometimes be confusing. In this chapter, we will use the industrial symbols and symbol designations because the programmable controller developed as an industrial machine.

Now, let us follow the series of events for the full control cycle of the automatic mixer process. At the start of the process, the start push button (1PB) is pressed. The start button energizes a control relay (1CR) located in the start/stop switch box.

The 1CR relay contacts are located in the first line, or rung, of the ladder diagram. They are shown in the normally open (NO) position. The same symbol with a slash drawn through it represents a normally closed (NC) relay contact.

When the relay (1CR) is energized (or pulled in or picked up), these relay contacts change state; in this case, they close. When the 1CR contact under the 1PB switch closes, it allows current to continue through the coil of the 1CR relay even though the start push button (1PB) has been released. This circuit holds the 1CR relay in as long as line power is applied, the stop button (2PB) is not pushed, and the timing relay (1TR) has not timed out.

Another 1CR contact is located in the second rung of the ladder diagram. When this 1CR closes, current can flow through solenoid A. Solenoid A is an electromechanical device that is electrically activated to mechanically open a valve, which allows fluid to flow into the tank. Fluid flows because the float switch (1FS) in rung 2 is closed.

When the tank has filled, the float switch (1FS) changes to the filled position. This change de-energizes solenoid A, starts the timer relay, and operates the mixer solenoid (MS).

After the timer has timed out, relay 1TR switches off the mixer and energizes solenoid B, which empties the tank. When the tank is empty, the float switch (1FS) shuts off solenoid B and places the system in the ready position for the next manual start.

Notice that pressing the start switch (1PB) again once the cycle has started will have no adverse effect on the cycle. This protective logic should be designed into all processes, whether a PC or a computer is used.
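To make the cycle described above concrete, here is a minimal Python sketch of the same sequence modeled as a state machine. It is illustrative only; the class and signal names (press_start, tick, tank_full) are hypothetical and do not represent any PC programming language:

```python
# Conceptual sketch of the automatic mixer cycle as a simple state machine.
IDLE, FILLING, MIXING, DRAINING = "idle", "filling", "mixing", "draining"

class AutomaticMixer:
    def __init__(self, mix_time):
        self.state = IDLE
        self.mix_time = mix_time   # 1TR preset, in scan ticks
        self.timer = 0

    def press_start(self):         # 1PB: start push button
        if self.state == IDLE:
            self.state = FILLING   # 1CR seals in; solenoid A opens the fill valve
        # pressing start again mid-cycle has no effect (protective logic)

    def press_stop(self):          # 2PB: stop push button drops out 1CR
        self.state = IDLE

    def tick(self, tank_full=False, tank_empty=False):
        """Advance one scan using the float-switch (1FS) inputs."""
        if self.state == FILLING and tank_full:
            self.state = MIXING        # solenoid A off, mixer MS on, timer 1TR starts
            self.timer = 0
        elif self.state == MIXING:
            self.timer += 1
            if self.timer >= self.mix_time:
                self.state = DRAINING  # 1TR timed out: mixer off, solenoid B drains
        elif self.state == DRAINING and tank_empty:
            self.state = IDLE          # solenoid B off, ready for the next start

mixer = AutomaticMixer(mix_time=3)
mixer.press_start()             # 1PB pressed
mixer.tick(tank_full=True)      # tank fills -> mixing begins
print(mixer.state)              # 'mixing'
```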

 

Programmable Controller

 

In 1978, the National Electrical Manufacturers Association (NEMA) released a standard for programmable controllers. This standard was the result of four years of work by a committee made up of representatives from PC manufacturers. NEMA Standard ICS3-304 defines a programmable controller as "a digitally operating electronic apparatus which uses a programmable memory for the internal storage of instructions for implementing specific functions such as logic, sequencing, timing, counting, and arithmetic to control, through digital or analog input/output modules, various types of machines or processes. A digital computer which is used to perform the functions of a programmable controller is considered to be within this scope. Excluded are drum and similar mechanical type sequencing controllers."

There is a tendency to confuse PCs with computers and with the programmable process controllers that are used for numerical control and position control. Numerical and position control are used where a very large number of incremental positions are needed to complete a task. Examples are a lathe and a drilling machine. These tasks are not normally handled well by a PC, although that situation is changing.

What is the difference between a PC and a computer? To start with, all PCs are computers. The PC's block diagram structure is the same as that given in Chapter 11 for the computer. However, not all computers are PCs. The major differences that distinguish a PC from a computer are the PC's ability to operate in harsh environments, its different programming language, and its ease of troubleshooting and maintenance.

Programmable controllers are designed to operate in industrial environments that are dirty, are electrically noisy, have wide fluctuations in temperature (0°C to 60°C), and have relative humidities from 0% to 95%. Air conditioning, which is generally required for computers, is not required for PCs.

The PC's programming language has been, by popular demand, the ladder diagram with standard relay symbology. The reason for the ladder diagram's popularity is that plant personnel are very familiar with relay logic from their previous experience with sequential controls. There are other PC languages in use, however. One language involves Boolean statements relating logical inputs such as and, or, and invert to a single statement output. This language uses instructions such as AND, OR, LOAD, STORE, and so on. This type of language is very similar to computer assembly language.
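As an illustration of how such a Boolean-statement language relates to a ladder rung, the seal-in rung of the mixer example (1CR held in by 1PB, 2PB, and 1TR) can be written as a single Boolean expression. The sketch below uses Python purely for illustration; it is not an actual PC instruction list:

```python
# The seal-in rung expressed as one Boolean statement:
# 1CR = (1PB OR 1CR) AND NOT 2PB AND NOT 1TR
def rung_1cr(start_pb, stop_pb, timer_timed_out, cr_held):
    return (start_pb or cr_held) and not stop_pb and not timer_timed_out

cr = False
cr = rung_1cr(start_pb=True,  stop_pb=False, timer_timed_out=False, cr_held=cr)  # button pressed
cr = rung_1cr(start_pb=False, stop_pb=False, timer_timed_out=False, cr_held=cr)  # button released
print(cr)  # True: the relay remains sealed in after the button is released
```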

The last major difference between PCs and computers is that of troubleshooting and maintenance. The PC can be maintained by the plant electrician or technician with minimal training. Most of the maintenance is done by replacing modules rather than components. Many times, the PC has a diagnostic program that assists the technician in locating bad modules. Computers, on the other hand, require highly trained electronics specialists to maintain and troubleshoot them.

 

MICROPROCESSORS

Computers and calculating machines have been around a long time. Probably the earliest calculating machine was the abacus, a hand-operated computing machine using beads on wires. This machine is so old that its origin is only speculative. As technology developed, different calculating machines were invented, all of them mechanical. In 1946, however, the first large-scale electronic calculating machine was produced, the ENIAC. It contained over eighteen thousand vacuum tubes (filling a large room) and required a power supply half the size of the computer itself. The cost was prohibitive, too: over a million dollars to build and hundreds of thousands of dollars to use. Obviously, it was only used to solve very important problems. Since that time, a series of inventions has dramatically reduced the cost of building computer circuits. Today, a $10 computing device has essentially the same capabilities as the early ENIAC computer.

There are three generally recognized classes of computers today: the mainframe computer, the minicomputer, and the microcomputer. The differences among these classifications, and the points where one class begins and another leaves off, are not well defined. In general, only price and packaging differentiate the three classifications. The minicomputer is less expensive and less powerful than the mainframe, and the microcomputer is less expensive and less powerful than the minicomputer. At the same time, there is considerable overlap; the most powerful minicomputer is more powerful than the least powerful mainframe, and so on. This overlapping situation is compounded by the fact that, as technology improves, the capabilities of a lower computer classification approach those of the classification above it. Basically, a mainframe is a mainframe and a minicomputer is a minicomputer because that is what the manufacturer calls it.

There are very few products whose continued development has led to vast improvements in the product itself and, at the same time, a vast reduction in its cost; the microcomputer, however, is one of them. It has been said that if the automotive industry had had a similar history, we would now have an automobile that would cost about $1.00 and travel at the speed of sound!

Because of these microcomputer advances, it is not hard to see why there are so many new microcomputer applications. This chapter will discuss some of those applications and related topics such as microcomputer structure, architecture, and families. The microprocessor is becoming increasingly important in industrial electronics as a topic unto itself. Thus, we no longer have the luxury of just knowing something about the microprocessor; now, we must become proficient in its operation and application.

 

Computer Structure

 

The basic system includes five fundamental units: arithmetic-logic unit, control unit (together, these comprise the central-processing unit), input, output, and memory (storage).

 

Central-processing Unit (CPU)

 

At the center of the computer is the central-processing unit (CPU), which is generally defined by its composition. The CPU is composed of two units, the control unit (CU) and the arithmetic-logic unit (ALU).

The function of the arithmetic-logic unit is to perform arithmetic and logic operations on data passing through it. Typical arithmetic functions include addition and subtraction. Logic operations include the logical and, or, exclusive or, not, and shift operations.
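As a brief illustration (not taken from the text), Python's bitwise operators can model these ALU functions on 8-bit values; the 0xFF mask simply keeps results to eight bits:

```python
a, b = 0b11001010, 0b10100110   # two 8-bit operands

add   = (a + b) & 0xFF    # addition (result kept to 8 bits)
sub   = (a - b) & 0xFF    # subtraction (two's-complement wraparound)
and_  = a & b             # logical AND
or_   = a | b             # logical OR
xor   = a ^ b             # exclusive OR
not_a = ~a & 0xFF         # NOT (one's complement)
shl   = (a << 1) & 0xFF   # shift left one bit

for name, value in [("ADD", add), ("AND", and_), ("XOR", xor), ("SHL", shl)]:
    print(f"{name}: {value:08b}")
```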

The control unit is the center of control for the computer and has the responsibility of sequencing the operation of the entire computer system. The CU is generally associated with the ALU that it controls. And, as mentioned above, the combination of the CU and the ALU is called the central-processing unit. A microprocessor is basically a CPU on a single chip.

The CPU does not need to be implemented as a single component, however, and the CU can be separated from the ALU. Components called bit-slices implement the ALU section of a traditional computer, exclusive of the control section. The CU then must be designed and assembled separately. The bit-slice approach is best used in applications where commercially available microprocessors are inadequate.

 

Memory

 

The main purpose of the memory module is to store a list of instructions, called a program, along with data.

Instructions are binary codes that are recognized by the CU and that cause the microprocessor to perform some procedure. An example of an instruction is the ADD instruction. When the CU receives the code to add, it causes the numbers to be added to be brought together in the ALU so that the ALU can add them.

A program is a sequence of instructions that has been written by the microprocessor user. A program is generally written in such a way as to implement an algorithm, which is a step-by-step specification of a sequence of operations that will solve some problem. An algorithm can be expressed in any form and in any language (computer language, English language, and so on). An example of an algorithm is a paragraph of directions for getting from my house to your house. A program is an algorithm written in a form the CU can recognize.
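As a toy illustration of these ideas (not any real instruction set), a short program of instructions can be executed by a minimal control-unit loop that routes operands to an accumulator playing the role of the ALU:

```python
memory = {}

def run(program):
    acc = 0                          # accumulator inside the ALU
    for opcode, operand in program:  # fetch-decode-execute cycle driven by the CU
        if opcode == "LOAD":
            acc = operand
        elif opcode == "ADD":
            acc = acc + operand      # the CU routes both numbers to the ALU
        elif opcode == "STORE":
            memory[operand] = acc
    return acc

run([("LOAD", 5), ("ADD", 7), ("STORE", "result")])
print(memory["result"])              # 12
```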

Data, which can also be contained in memory, are binary codes that represent numbers or characters. Characters are symbols that represent concepts such as the letter A or the symbol ?. A common code used to represent these characters (sometimes called alphanumerics) is the American Standard Code for Information Interchange (ASCII). This coding scheme uses eight bits, seven of which represent the 128 standard ASCII characters. Bit 8 is a parity bit (discussed in Chapter 10) used for error checking.
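For example, the following sketch builds the eight-bit code for the letter A, assuming even parity for bit 8 (the text does not specify which parity convention is used):

```python
ch = "A"
code = ord(ch)                        # 65 -> 1000001 in seven bits
parity = bin(code).count("1") % 2     # even parity: make the total count of 1s even
byte = (parity << 7) | code           # parity bit in bit 8, ASCII code in bits 1-7
print(f"{ch}: {byte:08b}")            # 01000001 ('A' already has an even number of 1s)
```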

Physically, several kinds of memories can be distinguished, depending on whether one can write information into the memory and then read it, or whether one can only read from the memory. Two of these types of memories are random-access memory and read-only memory.

Random-access memory (RAM) is a memory device in which information can be either written or read. The words random access refer to the fact that any location in the memory, called an address, can be accessed (made available for use). Read-only memory (ROM) is a memory that can only be read. The information in ROM was placed there only once, by the manufacturer. In actuality, ROM is also randomly accessible, since any location can be accessed, but ROM was the name given to it when it was first produced, and the name has stayed with it ever since. More properly, ROM should be called read-mostly memory, since the information has to be placed into the memory at some time, generally by the manufacturer.

Two more types of memory are programmable read-only memory (PROM) and erasable programmable read-only memory (EPROM). These types are subsets of ROM. The major difference between ROM and PROM is that the latter is not programmed at the manufacturer's facilities but at the user's. PROM may be written into only once; fusible links are melted in the device and prevent it from being reprogrammed. EPROM is also programmed by the user, but it can be reprogrammed because it uses different techniques and materials than PROM. An EPROM has a quartz window over the memory chip, the silicon material containing the electronic circuits in the integrated circuit. The quartz window allows ultraviolet light to erase the memory contents. At other times, when it is desired to preserve the program in EPROM, a cover is placed over the window.

Another important term associated with memory is volatility. A volatile memory is one that loses its information when power is removed from the device. RAMs are generally volatile. ROMs are nonvolatile; they do not lose their information when power is removed.

 

Input and Output

 

A keyboard and a sensor (temperature sensor, pressure sensor, light sensor) are examples of input devices. A keyboard is generally the way the program is placed in memory, either by way of the CU or by direct access to the memory module. The process of storing or retrieving information by going directly to the memory and not through the control unit is called direct memory access (DMA). There may also be more than one input module in a system, such as a keyboard for entering the program and a sensor whose output is used in the program.

The output module displays the information coming from the computer system. Examples of output devices are light-emitting diodes (LEDs), liquid crystal displays (LCDs), and cathode ray tubes (CRTs).

The combination of a CRT and a keyboard is referred to as a terminal. A terminal is used for both putting information into the computer and displaying the output from the computer system. An input-output function, like the terminal, is abbreviated I/O.

 

Busses

 

A bus is a set of signals or wires grouped by function. The modules of a computer are interconnected by means of three buses: the data bus, the address bus, and the control bus. An additional bus, not generally shown but which must be present, is the power bus.

An eight-bit microprocessor requires an eight-bit data bus (or eight separate paths, one for each bit) in order to transmit eight bits of information in parallel. The data bus is bidirectional. The term bidirectional refers to transmission of information in both directions on the bus (not at the same time). Arrowheads on one end only indicate unidirectional (one direction) transmission.

A standard microprocessor address bus has 16 lines and can address 2^16, or 65,536, different locations. The number 65,536 is referred to as 64K in computer jargon. This name came about because computer language is binary (based on powers of 2), and 2^10, which is equal to 1024, has traditionally been referred to as 1K (1000). Since 2^16 is equal to 2^6 times 2^10, or 64 times 2^10, 2^16 is (somewhat incorrectly) called 64K.
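The arithmetic is easy to verify, for instance in Python:

```python
print(2 ** 10)                    # 1024  -> traditionally called 1K
print(2 ** 16)                    # 65536 -> the full 16-bit address space
print(2 ** 16 == 64 * 2 ** 10)    # True  -> hence the jargon "64K"
```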

The control bus carries both status information – signals indicating what operation is in progress – and control information to and from the microprocessor unit. (The standard abbreviation for the microprocessor unit has become MPU.) There is no fixed number of lines required for a control bus.

The power bus is seldom shown in schematics or block diagrams, but it is very necessary. It provides power to all the components of the system.

The preceding discussion has focused on the computer's hardware, the physical components of the system. In contrast with the hardware is the computer software, the programs in memory or on paper and all other instructions and manuals associated with the computer.

 

ARTIFICIAL INTELLIGENCE

Hypothetically, it's a Monday morning: The rather drowsy manager of this corporate department – his name is John – literally stumbles into his office. "Good morning, John." It's a synthesized, but not unpleasant, greeting that comes from the computer on his desk. "Ready to get to work?" John groans, but the computer is used to this. It knows him pretty well – it knows, for instance, that he'll feel better as soon as he gets enmeshed in the affairs of the day. It immediately reminds him about the report they were putting together on Friday. "I finished it over the weekend," the computer tells John as the first rush of hot coffee hits the back of his throat. "I didn't think you'd mind." Then, as an afterthought: "But you'd better take a look at it; you know how mechanical my style can be."

The manager nods sleepily. "I'll check it later," he mutters. "How about the schedule?" The computer knows that on most days, John only wants to see the daily schedule, but on Monday morning he likes to see the entire week ahead. It is instantly displayed on the screen. John notices that a big meeting is set for Wednesday with the company's legal staff and decides to begin preparation for that. "What are they going to want to know?" he asks the computer. Without hesitation, the machine begins listing the relevant legal questions, pausing now and then to make sure he's following along. John, after all, is only human.

Almost from the moment digital computers made their appearance in the business world, computer scientists have been lured by the dream of a different kind of computer, one that would emulate the way human beings think rather than merely crunch numbers.

A personal computer that thinks, or does a reasonable imitation thereof, looms as a revolution in productivity. It would radically change the way people do their work in several respects. The most obvious is the computer's "user interface," the way people interact with it. Not only does John-the-manager not have to touch a keyboard, he has no concerns about the syntax of the instructions he gives the machine. It can interpret a vague reference or a grunt and can even anticipate his wishes.

Of even greater significance, however, are the types of tasks a thinking computer could take on. While today's productivity programs usually speed up the job you used to do on paper, tomorrow's promise to let you do things you can't do now. For instance, instead of passively storing data for you to retrieve, an intelligent personal computer could extract the information it thought relevant to a situation, much as a human advisor or consultant would marshal his expertise, even when you don't know enough to ask the right questions. The thinking computer would also have the ability to learn about you and your work, giving it the ability, like any good assistant, to do things for you the way you would do them yourself.

Computers that are faster, easier to use and more responsive to the particular needs of their users have long been the promise of the field of artificial intelligence, which is a research area that is now decades old.

In that time it has inspired a number of new programming languages, complex and powerful computer architectures, radical innovations in program development tools, and any number of exciting pilot projects. For all of its promise, though, artificial intelligence – universally known by the acronym AI – has yielded precious little in the way of practical applications.

There are some good reasons to think that situation may be changing. While John-the-manager's ideal desktop machine is still a long way off, new genres of software are beginning to appear that attempt to give the user tools which help him think and let him communicate thoughts to the computer more naturally. Two fields of particular importance in AI research, expert systems and natural language, are providing the inspiration for this movement in software development.

An expert system is a program that takes the place of a human expert by codifying the knowledge and rules he uses to reach his conclusions. The feasibility of expert systems has been demonstrated by a system used in a geophysical company to interpret data for oil exploration and by several medical systems that analyze symptoms and test results in order to diagnose diseases. Even in the specialized areas where they appear to have demonstrated their practicality, however, expert systems have yet to catch on as reliable or cost-effective alternatives to a human expert.

Natural language has, for the most part, been seen as a technology that's even further away from practicality than expert systems. A computer that would employ natural language would understand English or another human language – not just in the strict rules of grammar and syntax but in the ambiguous shorthand form we use every day. Hopes were raised as early as the 1960s by the demonstration program called Eliza, originally developed at MIT, which could carry on a seemingly natural dialogue while taking the role of a clinical psychologist helping the human user work out his or her problems. In point of fact, Eliza was based on some clever but insubstantial routines responding to a keyword in the user's statement. Until just the last few years, very little else of practical significance had been done with natural language.

Many individuals in the data processing world, where the imminent arrival of true expert systems and natural language interfaces has long been prophesied, have come to the conclusion that AI will never be more than pie-in-the-sky promises. The personal computer arena, rather than the corporate data processing environment, is where software developers are now turning to AI concepts to help explore ways of making computers more productive. The results are not always products that the AI scientists in universities and research labs would deem to represent true artificial intelligence, but they are real tools that address real needs of computer users.

A new software product from Samna Corp., for instance, called Samna/Dart, interfaces dissimilar computers using different word processing programs via a mainframe host, and allows any user to view and edit documents as if they had all been prepared on the individual's personal computer using his preferred word processor. To do this, it has to interpret the "meaning" of a wide range of formatting codes - what the originator of the document "meant" by inputting the instructions to set up the document in a specific way. "This ability actually borders on artificial intelligence," says Samna President Said Mohammadioun. The eventual "universal" word processor would rely heavily on AI.

