Applicability of Artificial Intelligence in Different Fields of Life


By
Prem Parashar
Senior Lecturer
Regional Institute of Management & Technology
Mandi Gobindgarh (Punjab)
E-mail : prem_parashar@yahoo.com
Website : www.premparashar.tripod.com
 


The main purpose of this paper is to highlight the features of Artificial Intelligence (AI), how it was developed, and some of its main applications.


What is Artificial Intelligence?

"Artificial intelligence is the study of ideas to bring into being machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment and intention. Each such machine should engage in critical appraisal and selection of differing opinions within itself. Produced by human skill and labor, these machines should conduct themselves in agreement with life, spirit and sensitivity, though in reality, they are imitations."

Development of Artificial Intelligence

The field of artificial intelligence is relatively young. The creation of Artificial Intelligence as an academic discipline can be traced to the 1950s, when scientists and researchers began to consider the possibility of machines possessing intellectual capabilities similar to those of human beings. Alan Turing, a British mathematician, first proposed a test to determine whether or not a machine is intelligent. The test later became known as the Turing Test, in which a machine tries to disguise itself as a human being in an imitation game by giving human-like responses to a series of questions. Turing believed that if a machine could make a human being believe that he or she was communicating with another human being, then the machine could be considered as intelligent as a human being.

The term "artificial intelligence" itself was created in 1956 by a professor of Massachusetts Institute of Technology, John McCarthy. McCarthy created the term for a conference he was organizing that year. The conference, which was later called the Dartmouth Conference by AI researchers, established AI as a distinct discipline. The conference also defined the major goals of AI: to understand and model the thought processes of humans and to design machines that mimic this behavior.

Much of the AI research in the period between 1956 and 1966 was theoretical in nature. The very first AI program, the Logic Theorist (presented at the Dartmouth Conference), was able to prove mathematical theorems. Several other programs followed, such as "Sad Sam" (written by Robert K. Lindsay in 1960), which understood simple English sentences and was capable of drawing conclusions from facts learned in a conversation. The conclusions drawn depend on the stored facts, which in AI are called the knowledge base (KB).
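
The idea of a knowledge base can be made concrete with a few lines of code. The sketch below is a hypothetical Python illustration, not a description of Sad Sam's actual design: facts are stored as simple triples, and a single hand-written rule derives a new fact from them, using the family example revisited later in this paper.

# Hypothetical sketch of a tiny knowledge base (KB) and one inference rule.
# Facts are stored as (relation, subject, object) triples.
kb = {
    ("brother", "Jim", "John"),   # "Jim is John's brother"
    ("mother", "Mary", "Jim"),    # "Jim's mother is Mary"
}

def infer_mothers(kb):
    """Rule: if X is Y's brother and M is X's mother, then M is Y's mother."""
    new_facts = set()
    for rel1, x, y in kb:
        if rel1 != "brother":
            continue
        for rel2, m, child in kb:
            if rel2 == "mother" and child == x:
                new_facts.add(("mother", m, y))
    return new_facts

print(infer_mothers(kb))   # {('mother', 'Mary', 'John')}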

Another was ELIZA, a program developed in 1966 by Joseph Weizenbaum at MIT that was capable of simulating the responses of a therapist to patients. With more and more successful demonstrations of the feasibility of AI, the focus of AI research shifted. Researchers turned their attention to solving specific problems in areas of possible AI application. This shift in research focus gave rise to the present-day definition of AI, that is, "a variety of research areas concerned with extending the ability of the computer to do tasks that resemble those performed by human beings," as V. Daniel Hunt puts it in his 1988 article "The Development of Artificial Intelligence" (Andriole 52). Some of the most interesting areas of current AI research include expert systems, neural networks, and robotics.

Expert Systems

The first area of AI application we explore is expert systems, which are AI programs that can make decisions that normally require a human level of expertise. A program called DENDRAL, developed at Stanford University beginning in 1965, was the grandparent of expert systems. Much like a human chemist, it could analyze information about chemical compounds to determine their molecular structure. A later program called MYCIN, developed in the mid-1970s, was capable of helping physicians diagnose bacterial infections. It is often referred to as the first true expert system.


Expert systems are perhaps the most easily implemented and most widely used AI technology. Although the effects of such systems may not be readily apparent, they have had a tremendous impact on our lives. In fact, many of the computer programs we use today can be considered expert systems. The spell-checking utility in a word processor is an expert system: it takes the role of a proofreader by reading a group of sentences, checking them against known spelling and grammatical rules, and suggesting possible corrections to the writer. Expert systems, combined with robotics, brought about the automation of manufacturing processes, which accelerated production rates and reduced errors. A typical assembly line that required hundreds of people in the 1950s now requires only ten to twenty people, who supervise the expert systems that do the job. The pioneers in industrial automation are Japanese automobile manufacturers such as Toyota and Honda, with up to 80% automation of the manufacturing process.
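
The spell-checker analogy can be sketched in a few lines of Python. The word list, threshold, and sample text below are invented for illustration; the point is simply that the "knowledge" is a list of accepted words and the "rule" is to flag anything unknown and propose the closest known alternatives.

import difflib

# Hypothetical miniature "spell-checking expert system": its knowledge base is
# a word list, and its single rule is to flag any word not in the list and
# suggest the closest known words.
KNOWN_WORDS = {"artificial", "intelligence", "machine", "learning", "expert", "system"}

def check(text):
    for word in text.lower().split():
        if word not in KNOWN_WORDS:
            suggestions = difflib.get_close_matches(word, list(KNOWN_WORDS), n=2)
            print(f"'{word}': possible corrections -> {suggestions}")

check("artifical inteligence expert systm")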

The most advanced expert systems, like many other advanced technologies, are used extensively in military applications. An example is the next-generation fighter plane of the U.S. Air Force -- the F-22 Raptor. The targeting computer onboard the Raptor takes the role of a radar controller by interpreting radar signals, identifying a target, and checking its radar signature against known enemy types stored in its database.

Neural Networks

Another area of great interest is neural networks, which implement the ability to learn in a computer program. The ability to make connections between facts and draw conclusions is central to learning. Humans rely on what we call common sense to make such connections, but something that is common sense to us may be very difficult to implement in a computer program. One such case is making a causal connection; as Charles L. Ortiz Jr. wrote, "The occurrence of an event is never an isolated matter. An event owes its existence to other events which causally precede it; an event's presence is, in turn, felt by certain collections of subsequent events" (Artificial Intelligence Volume 111, p. 73). Each processing element in a neural network receives a number of inputs, processes them to determine which connections need to be made, and sends its output to the relevant elements, much as a human neuron does.
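
As a rough illustration of that description, the Python sketch below models a single processing element as a weighted sum passed through a squashing function and chains a few of them into layers. The weights and layer sizes are arbitrary values chosen for the example, and no learning is shown.

import math

def neuron(inputs, weights, bias):
    """One node: weighted sum of inputs followed by a squashing function."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid activation

def forward(inputs, layer_weights, layer_biases):
    """Feed the inputs through successive layers of nodes."""
    signal = inputs
    for weights, biases in zip(layer_weights, layer_biases):
        signal = [neuron(signal, w, b) for w, b in zip(weights, biases)]
    return signal

# Two inputs -> a hidden layer of two nodes -> one output node.
hidden_w = [[0.5, -0.4], [0.3, 0.8]]
hidden_b = [0.1, -0.2]
output_w = [[1.2, -0.7]]
output_b = [0.05]

print(forward([0.9, 0.1], [hidden_w, output_w], [hidden_b, output_b]))

A real network would adjust those weights automatically during training; here they are fixed, so the sketch only demonstrates how signals flow from node to node.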

The aforementioned "Sad Sam" program is an example of the principles of a neural network in action, though it is primitive and works with limited input. Given the sentences "Jim is John's brother" and "Jim's mother is Mary," Sad Sam was smart enough to conclude that Mary must therefore be John's mother (ai.about.com). While it is relatively easy to let a program make connections among a limited set of information, there are innumerable connections that can be made about things in the real world, and this huge number of possible connections makes the implementation of sophisticated neural networks a daunting task. A spin-off of the neural network problem is fuzzy logic, which deviates from the traditional yes-or-no type of Boolean logic. In fuzzy logic, values are no longer discrete and mutually exclusive; that is, a value can belong to two categories simultaneously. An example is temperature: ninety degrees Fahrenheit is "hot" when one is talking about outdoor temperature, but as a body temperature it is abnormally "cold." Through the implementation of fuzzy logic, a neural network would be able to make that same judgment.
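
The temperature example can be expressed with simple membership functions. The Python snippet below is a hypothetical illustration with invented breakpoints: instead of a yes-or-no answer, each function returns a degree of membership between 0 and 1, so the same ninety-degree reading scores high as "hot outdoors" but zero as "feverish."

# Hypothetical fuzzy-membership sketch for the temperature example above.
# The breakpoints are invented for illustration, not standard values.

def fuzzy_hot_outdoors(temp_f):
    """Degree (0.0-1.0) to which an outdoor temperature counts as 'hot'."""
    if temp_f <= 70:
        return 0.0
    if temp_f >= 95:
        return 1.0
    return (temp_f - 70) / 25.0          # gradual transition, not yes/no

def fuzzy_fever(body_temp_f):
    """Degree to which a body temperature counts as 'feverish'."""
    if body_temp_f <= 98.6:
        return 0.0
    if body_temp_f >= 104:
        return 1.0
    return (body_temp_f - 98.6) / 5.4

print(fuzzy_hot_outdoors(90))   # 0.8 -- ninety degrees is quite hot outdoors
print(fuzzy_fever(90))          # 0.0 -- but nowhere near a fever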

There are still many open problems in neural network research, including creating algorithms to make the connections, to determine which sets of data should be connected, and to discard irrelevant data when necessary. Many aspects of the human learning process present challenges for the implementation of a neural network, and the complexity of these problems is the reason why much theoretical work remains to be done in the field. While a complete set of solutions lies beyond the scope of the theories and technology currently available, the principles and partial solutions have been implemented with great success. Deep Blue, the chess-playing program developed by IBM, is often cited as an example of such learning principles in action: it drew on large databases of previous games and evaluated the possible moves of an opponent. As our understanding of the human brain and the learning process grows, so will our ability to create more effective algorithms for learning and for making connections among known ideas.

AI in Robotics

Robotics is the area of AI technology most attractive to the public. In fact, robotics could be the area where AI is most beneficial to mankind. The use of industrial robots that do repetitive tasks accurately has already increased the productivity of assembly lines in manufacturing plants. The addition of artificial intelligence to these industrial robots could further boost their productivity by allowing them to do a wider variety of tasks and to do so more efficiently. In the future, nano-robots small enough to enter the human body may be able to repair damaged organs and destroy bacteria and cancerous tissue. Special-purpose robots such as bomb-defusing robots and space exploration robots can go into hostile environments and accomplish tasks deemed too dangerous for humans.

While the benefit of robots with AI is great, there are numerous technical hurdles to implementing AI in a robot, many of which are being researched today. A robot must be capable of perception in order to interact with the world around it. The ability to see, hear, and touch can be implemented through cameras, infrared and ultrasound sensors, collision sensors, and other devices. While installing these physical sensors is relatively simple, enabling the robot to make sense of the information they provide can be quite difficult.
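
As a trivial, hypothetical sketch of that difficulty, consider a single ultrasound distance sensor: it produces noisy numbers that the robot must interpret before acting. The readings and threshold below are made up, and real perception pipelines are vastly more involved, but the gap between raw data and a decision is already visible.

def smooth(readings, window=3):
    """Average consecutive readings to damp sensor noise."""
    return [
        sum(readings[i:i + window]) / window
        for i in range(len(readings) - window + 1)
    ]

def decide(readings_cm, stop_distance_cm=30):
    """Return 'stop' if the smoothed distance drops below the threshold."""
    for distance in smooth(readings_cm):
        if distance < stop_distance_cm:
            return "stop"
    return "keep moving"

raw = [120, 118, 95, 40, 28, 26, 27]   # an obstacle getting closer
print(decide(raw))                     # stop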

An early example is SHRDLU, a robot system that could see and stack blocks on a table and even answer questions about the objects on it. It was a true breakthrough, for it not only could see three-dimensional objects but also had a basic understanding of physics and could use this knowledge to accomplish work on its own. However, one must not forget that such robots can only operate in a limited environment with a few stationary geometric objects, which the researchers called "the micro-blocks world" (ai.about.com). The real world is far more complex, as it contains far more dynamic objects.

Conclusion

The field of artificial intelligence is truly a fascinating one. Like many other new technologies, AI is changing our lives every day. It is quite possible that the near future will bring intelligent machines that make life more convenient and comfortable for all of us. Although some may argue otherwise, there is no need to fear artificial intelligence. Like all other machines, AI machines do what human programmers tell them to do. There is, however, a need to understand AI, for it is through understanding that we can make AI technology most beneficial.

While expert systems can be extremely helpful to human beings, there are tasks that current expert systems simply cannot accomplish. To return to our earlier example, the spell-checking utility can check the mechanics of an article, but it cannot check the more important aspects such as content and logic. Thus, it is only a marginally helpful proofreader. It would be a much more competent one if it could also identify logical shortcomings. To do so, an expert system must be able to make cognitive connections between objects.

Additional Readings:

Elaine Rich and Kevin Knight, "Artificial Intelligence," 1991.
W. Patterson, "An Introduction to Artificial Intelligence and Expert Systems."
 


 


 


 
