Department of Computer Engineering
Subject: Applied Information Theory
Neural networks
Done by: Gaukhar Zharylgapkyzy,
Dinara Sabazova
2nd year students, FIT, AC
Checked by: Kaimov A.
Almaty 2013
Contents:
1. Introduction to Neural Networks
1.1 What is a neural network?
1.2 Historical background
1.3 Why use neural networks?
1.4 Neural networks versus conventional computers - a comparison
2. Applications of neural networks
2.1 Neural networks in practice
2.2 Neural networks in medicine
2.2.1 Modelling and Diagnosing the Cardiovascular System
2.2.2 Electronic noses - detection and reconstruction of odours by ANNs
2.2.3 Instant Physician - a commercial neural net diagnostic program
2.3 Neural networks in business
2.3.1 Marketing
2.3.2 Credit evaluation
3. Conclusion
1. Introduction to Neural Networks
Neural networks mimic the pattern of human learning to solve difficult tasks in data management and pattern recognition. By configuring virtual neural networks that function like the human brain, computers can perform tasks at greater speed and with greater flexibility of application. These networks can offer valuable insight into the vast stores of information that are common today.
The human brain is constructed of a vast network of interconnected entities. These entities function together to enable us to learn and to perform a diverse array of tasks. The neuron is responsible for this learning process, and it is made up of three main parts: the dendrites, the soma, and the axon. The dendrites form the input network, consisting of branches that connect to tens of thousands of other neurons. This interconnectedness is what determines our adaptability and creativity. The next element of the neuron is the soma. This is the processing element that determines the threshold at which the neuron will respond. A constant flow of chemical signals stimulates an output once the integrated signal reaches a certain threshold value. When a signal is finally generated, it is conducted down the axon and then continues to another neuron's dendrites or to muscle cells. (“Nervous System”, Encyclopedia Britannica)
During the learning process, adjustments are made to the response threshold values. These adjustments cause the soma to become more excitable, allowing the neuron to generate an output signal at lower integrated levels of chemical stimulation. As we become more familiar with certain tasks, a lower level of excitation is needed, which allows those tasks to become “second nature” and to require less effort to perform. (Neurotransmission, Department of Psychology)
Our knowledge of how we learn is somewhat limited, but similar principles are found in biological neural networks, and these can be used in artificial networks as well. As in their biological counterparts, artificial networks have processing elements analogous to the neuron. Each element has interconnected pathways for its inputs, similar to the dendrites, and uses a summation, or integration, process that determines the output threshold of the individual element. Each element is one of many arranged in layers; it is connected to numerous others within its layer as well as to elements in other layers. This arrangement forms a highly interconnected array of neurons, which increases the flexibility of the system and allows it to closely mimic the capabilities of its biological counterpart. (Neural Networks, Christos Stergiou and Dimitrios Siganos)
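As a rough, illustrative sketch of the idea (not part of the cited sources), such a layered arrangement can be written as a weighted sum of inputs followed by a threshold at each element; the sizes and random weights below are arbitrary:

import numpy as np

def layer_forward(inputs, weights, biases):
    # Each element integrates (sums) its weighted inputs and fires if a threshold is crossed.
    return (weights @ inputs + biases >= 0).astype(float)

# Illustrative network: 3 inputs, a layer of 4 elements, then a layer of 2 elements.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

x = np.array([0.5, 0.2, 0.9])           # signals arriving on the input connections
hidden = layer_forward(x, W1, b1)       # first layer of interconnected elements
output = layer_forward(hidden, W2, b2)  # second layer uses the first layer's outputs
print(output)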
There are two main methods of learning: supervised and unsupervised. These learning processes are similar to the biological model. In the supervised method, the system is told what the outcome should be for the given input values. The system then processes the inputs in order to produce that output within an acceptable error margin. If the desired output is not produced, the system goes back to the interconnection weights of the processing elements and adjusts them until the error is acceptable. The adjustments are made to the integrating equations, which determine the excitability of an element within the network. This method of learning is used to create a network that can generate models when there is a vast number of input variables to be evaluated. The other method is to let the system make adjustments without an output model to compare against; it makes the adjustments needed to discover patterns and inter-relations within the input data. (Artificial Neural Networks, Daniel Klerfors)
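A minimal sketch of the supervised procedure just described, assuming a single threshold element trained on a toy logical-AND task (the data, learning rate and number of passes are illustrative choices, not values from the text):

import numpy as np

# Toy supervised task: inputs and the outputs the "teacher" expects (logical AND).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

w, b = np.zeros(2), 0.0
learning_rate = 0.1                               # illustrative value

for _ in range(20):                               # repeat until the error is acceptable
    for x_i, target in zip(X, y):
        output = 1.0 if np.dot(w, x_i) + b >= 0 else 0.0
        error = target - output                   # compare with the desired output
        w += learning_rate * error * x_i          # adjust the interconnection weights
        b += learning_rate * error

print([1.0 if np.dot(w, x_i) + b >= 0 else 0.0 for x_i in X])  # reproduces the targets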
Many large corporations have begun to use neural networks in a wide range of applications. Medical institutions have started investigating their benefits in areas such as the complicated task of diagnosing patients: by inputting various symptoms, the computer can search the vast number of possible afflictions, which would greatly assist in diagnosing illnesses and assessing their severity. Financial institutions are also researching various applications, including stock market forecasting, fraud detection, and foreign market trend analysis. Research is also being done on using neural network software for optical character recognition of cursive handwriting. (Applications of Artificial Neural Networks, Letze Anderung)
The potential uses of neural network technology form a widely diversified market with many possibilities. I believe that as our understanding of biological networks and learning increases, artificial neural networks will continue to advance. The potential of such systems is virtually limitless. They are not confined to being controlled by fixed algorithms, as typical computers are, and so they are not limited in their range of possible applications. In my opinion, the highly intricate nature of nonlinear mathematics and the growing complexity of ever-larger numbers of interconnected processing elements will make the evolution of artificial networks difficult, but the resulting systems extremely functional.
1.1 What is a Neural Network?
An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurones) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurones. This is true of ANNs as well.
1.2 Historical background
Neural network simulations appear to be a recent development. However, this field was established before the advent of computers, and has survived at least one major setback and several eras.
Many important advances have been boosted by the use of inexpensive computer emulations. Following an initial period of enthusiasm, the field survived a period of frustration and disrepute. During this period, when funding and professional support were minimal, important advances were made by relatively few researchers. These pioneers were able to develop convincing technology which surpassed the limitations identified by Minsky and Papert, who had published a book in 1969 summing up a general feeling of frustration with neural networks among researchers; its conclusions were accepted by most without further analysis. Currently, the neural network field enjoys a resurgence of interest and a corresponding increase in funding.
The history of neural networks that was described above can be divided into several periods:
First Attempts: There were some initial simulations using formal logic. McCulloch and Pitts (1943) developed models of neural networks based on their understanding of neurology. These models made several assumptions about how neurons worked. Their networks were based on simple neurons which were considered to be binary devices with fixed thresholds. The results of their model were simple logic functions such as "a or b" and "a and b". Further attempts used computer simulations, by two groups (Farley and Clark, 1954; Rochester, Holland, Haibt and Duda, 1956). The first group (IBM researchers) maintained close contact with neuroscientists at McGill University, so whenever their models did not work, they consulted the neuroscientists. This interaction established a multidisciplinary trend which continues to the present day.
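For illustration only, a McCulloch-Pitts style unit - a binary device with a fixed threshold - can reproduce the "a or b" and "a and b" functions mentioned above (a minimal sketch, not their original notation):

def mcculloch_pitts(inputs, threshold):
    # Binary neuron with a fixed threshold: fires (1) if enough inputs are active.
    return 1 if sum(inputs) >= threshold else 0

def a_or_b(a, b):
    return mcculloch_pitts([a, b], threshold=1)   # any single active input fires it

def a_and_b(a, b):
    return mcculloch_pitts([a, b], threshold=2)   # both inputs must be active

print([a_or_b(a, b) for a in (0, 1) for b in (0, 1)])   # [0, 1, 1, 1]
print([a_and_b(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]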
Promising & Emerging Technology: Not only was neuroscience influential in the development of neural networks; psychologists and engineers also contributed to the progress of neural network simulations. Rosenblatt (1958) stirred considerable interest and activity in the field when he designed and developed the Perceptron. The Perceptron had three layers, with the middle layer known as the association layer. This system could learn to connect or associate a given input to a random output unit.
Another system was the ADALINE (ADAptive LInear Element), developed in 1960 by Widrow and Hoff (of Stanford University). The ADALINE was an analogue electronic device made from simple components. Its learning method differed from that of the Perceptron: it employed the Least-Mean-Squares (LMS) learning rule.
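A hedged sketch of what distinguishes the LMS rule from Perceptron learning: the error is measured on the linear (analogue) output before any thresholding, and the weights follow a gradient step that reduces the squared error. The toy task, bipolar targets and learning rate below are assumptions for illustration:

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([-1, -1, -1, 1], dtype=float)    # bipolar targets for logical AND

w, b = np.zeros(2), 0.0
mu = 0.1                                      # learning rate (illustrative)

for _ in range(200):
    for x_i, target in zip(X, t):
        net = np.dot(w, x_i) + b              # linear output, before any threshold
        error = target - net                  # LMS: error of the analogue output
        w += mu * error * x_i                 # gradient step on the squared error
        b += mu * error

print(np.sign(X @ w + b))                     # thresholded afterwards: [-1, -1, -1, 1]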
Period of Frustration & Disrepute: In 1969 Minsky and Papert wrote a book in which they generalised the limitations of single-layer Perceptrons to multilayered systems. In the book they said: "...our intuitive judgment that the extension (to multilayer systems) is sterile". The significant result of their book was to eliminate funding for research with neural network simulations. The conclusions supported the disenchantment of researchers in the field, and as a result considerable prejudice against this field was activated.
Innovation: Although public interest and available funding were minimal, several researchers continued working to develop neuromorphically based computational methods for problems such as pattern recognition.
During this period several paradigms were generated which modern work continues to enhance. Grossberg's influence (Steve Grossberg and Gail Carpenter, in 1988) founded a school of thought which explores resonating algorithms; they developed the ART (Adaptive Resonance Theory) networks based on biologically plausible models. Anderson and Kohonen developed associative techniques independently of each other. Klopf (A. Henry Klopf), in 1972, developed a basis for learning in artificial neurons based on a biological principle for neuronal learning called heterostasis.
Werbos (Paul Werbos, 1974) developed and used the back-propagation learning method; however, several years passed before this approach was popularized. Back-propagation nets are probably the best known and most widely applied of the neural networks today. In essence, the back-propagation net is a Perceptron with multiple layers, a different threshold function in the artificial neuron, and a more robust and capable learning rule.
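A minimal sketch of back-propagation on a small two-layer sigmoid network, trained here on the XOR problem that a single-layer Perceptron cannot solve; the layer sizes, learning rate and iteration count are illustrative choices, not prescribed values:

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)     # input -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)     # hidden -> output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for step in range(10000):
    # Forward pass through the layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back through the layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))   # should approach [[0], [1], [1], [0]] after training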
Amari (Shun-Ichi Amari, 1967) was involved with theoretical developments: he published a paper which established a mathematical theory for a learning basis (error-correction method) dealing with adaptive pattern classification. Fukushima (Kunihiko Fukushima) developed a stepwise-trained multilayered neural network for the interpretation of handwritten characters; the original network, called the Cognitron, was published in 1975.
Re-Emergence: Progress during the late 1970s and early 1980s was important to the re-emergence of interest in the neural network field. Several factors influenced this movement. For example, comprehensive books and conferences provided a forum for people in diverse fields with specialized technical languages, and the response to conferences and publications was quite positive. The news media picked up on the increased activity, and tutorials helped disseminate the technology. Academic programs appeared and courses were introduced at most major universities (in the US and Europe). Attention is now focused on funding levels throughout Europe, Japan and the US, and as this funding becomes available, several new commercial ventures with applications in industry and financial institutions are emerging.
Today: Significant progress has been made in the field of neural networks - enough to attract a great deal of attention and to fund further research. Advancement beyond current commercial applications appears to be possible, and research is advancing the field on many fronts. Neurally based chips are emerging and applications to complex problems are developing. Clearly, today is a period of transition for neural network technology.
The first artificial neuron was produced in 1943 by the neurophysiologist Warren McCulloch and the logician Walter Pitts, but the technology available at that time did not allow them to do very much.
1.3 Why use neural networks?
Neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, can be used to extract patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. A trained neural network can be thought of as an "expert" in the category of information it has been given to analyze. This expert can then be used to provide projections given new situations of interest and to answer "what if" questions.
Other advantages include:
Adaptive learning: an ability to learn how to do tasks based on the data given for training or initial experience.
Self-organisation: an ANN can create its own organisation or representation of the information it receives during learning time.
Real-time operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.
Fault tolerance via redundant information coding: partial destruction of a network leads to a corresponding degradation of performance; however, some network capabilities may be retained even with major network damage.
1.4 Neural networks versus conventional computers
Neural networks take a different approach to problem solving than conventional computers. Conventional computers use an algorithmic approach, i.e. the computer follows a set of instructions in order to solve a problem. Unless the specific steps that the computer needs to follow are known, the computer cannot solve the problem. That restricts the problem-solving capability of conventional computers to problems that we already understand and know how to solve. But computers would be so much more useful if they could do things that we don't exactly know how to do.
Neural networks process information in a similar way to the human brain. The network is composed of a large number of highly interconnected processing elements (neurons) working in parallel to solve a specific problem. Neural networks learn by example; they cannot be programmed to perform a specific task. The examples must be selected carefully, otherwise useful time is wasted or, even worse, the network might function incorrectly. The disadvantage is that because the network finds out how to solve the problem by itself, its operation can be unpredictable.
On the other hand, conventional computers use a cognitive approach to problem solving; the way the problem is to be solved must be known and stated in small, unambiguous instructions. These instructions are then converted to a high-level language program and then into machine code that the computer can understand. These machines are totally predictable; if anything goes wrong, it is due to a software or hardware fault.
Neural networks and conventional algorithmic computers are not in competition but complement each other. There are tasks that are more suited to an algorithmic approach, like arithmetic operations, and tasks that are more suited to neural networks. Moreover, a large number of tasks require systems that use a combination of the two approaches (normally a conventional computer is used to supervise the neural network) in order to perform at maximum efficiency.
2. Applications of neural networks
2.1 Neural networks in practice
Given this description of neural networks and how they work, what real-world applications are they suited for? Neural networks have broad applicability to real-world business problems. In fact, they have already been successfully applied in many industries.
Since neural networks are best at identifying patterns or trends in data, they are well suited for prediction or forecasting needs including:
sales forecasting
industrial process control
customer research
data validation
risk management
target marketing
To give some more specific examples, ANNs are also used in the following paradigms: recognition of speakers in communications; diagnosis of hepatitis; recovery of telecommunications from faulty software; interpretation of multi-meaning Chinese words; undersea mine detection; texture analysis; three-dimensional object recognition; hand-written word recognition; and facial recognition.
2.2 Neural networks in medicine
Artificial Neural Networks (ANN) are currently a 'hot' research area in medicine and it is believed that they will receive extensive application to biomedical systems in the next few years. At the moment, the research is mostly on modeling parts of the human body and recognizing diseases from various scans (e.g. cardiograms, CAT scans, ultrasonic scans, etc.).
Neural networks are ideal for recognizing diseases using scans, since there is no need to provide a specific algorithm on how to identify the disease. Neural networks learn by example, so the details of how to recognize the disease are not needed. What is needed is a set of examples that are representative of all the variations of the disease. The quantity of examples is not as important as their quality. The examples need to be selected very carefully if the system is to perform reliably and efficiently.
2.2.1 Modelling and Diagnosing the Cardiovascular System
Neural networks are used experimentally to model the human cardiovascular system. Diagnosis can be achieved by building a model of the cardiovascular system of an individual and comparing it with the real-time physiological measurements taken from the patient. If this routine is carried out regularly, potentially harmful medical conditions can be detected at an early stage, making the process of combating the disease much easier.
A model of an individual's cardiovascular system must mimic the relationship among physiological variables (i.e., heart rate, systolic and diastolic blood pressures, and breathing rate) at different physical activity levels. If a model is adapted to an individual, then it becomes a model of the physical condition of that individual. The simulator will have to be able to adapt to the features of any individual without the supervision of an expert. This calls for a neural network.
Another reason that justifies the use of ANN technology is the ability of ANNs to provide sensor fusion, which is the combining of values from several different sensors. Sensor fusion enables the ANNs to learn complex relationships among the individual sensor values which would otherwise be lost if the values were individually analysed. In medical modelling and diagnosis, this implies that even though each sensor in a set may be sensitive only to a specific physiological variable, ANNs are capable of detecting complex medical conditions by fusing the data from the individual biomedical sensors.
2.2.2 Electronic noses - detection and reconstruction of odours by ANNs
ANNs are used experimentally to implement electronic noses. Electronic noses have several potential applications in telemedicine. Telemedicine is the practice of medicine over long distances via a communication link. The electronic nose would identify odours in the remote surgical environment. These identified odours would then be electronically transmitted to another site, where an odour-generation system would recreate them. Because the sense of smell can be an important sense to the surgeon, telesmell would enhance telepresent surgery.
2.2.3 Instant Physician - a commercial neural net diagnostic program
An application developed in the mid-1980s called the "instant physician" trained an autoassociative memory neural network to store a large number of medical records, each of which includes information on symptoms, diagnosis, and treatment for a particular case. After training, the net can be presented with input consisting of a set of symptoms; it will then find the full stored pattern that represents the "best" diagnosis and treatment.
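A hypothetical sketch of the idea behind such an autoassociative (Hopfield-style) memory: stored records are bipolar patterns of symptom/diagnosis flags, and presenting a partial pattern of symptoms recalls the closest stored record. The patterns, their size and the update schedule are invented for illustration:

import numpy as np

patterns = np.array([
    [ 1, -1,  1,  1, -1, -1],               # stored record 1 (symptom/diagnosis flags)
    [-1,  1, -1,  1,  1, -1],               # stored record 2
])

n = patterns.shape[1]
W = sum(np.outer(p, p) for p in patterns).astype(float)   # Hebbian storage of the records
np.fill_diagonal(W, 0)

state = np.array([1, -1, 1, -1, -1, -1])    # incomplete input: only some symptoms known
for _ in range(5):                          # update units until the state settles
    for i in range(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(state)   # settles on the stored record closest to the input (record 1 here)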
2.3 Neural networks in business
Business is a diverse field with several general areas of specialization, such as accounting or financial analysis. Almost any neural network application would fit into one business area or financial analysis. There is some potential for using neural networks for business purposes, including resource allocation and scheduling. There is also a strong potential for using neural networks for database mining, that is, searching for patterns implicit within the explicitly stored information in databases. Most of the funded work in this area is classified as proprietary, so it is not possible to report on the full extent of the work going on. Most work applies neural networks, such as the Hopfield-Tank network, to optimization and scheduling.
2.3.1 Marketing
There is a marketing application which has been integrated with a neural network system. The Airline Marketing Tactician (a trademark abbreviated as AMT) is a computer system made of various intelligent technologies, including expert systems. A feedforward neural network is integrated with the AMT and was trained using back-propagation to assist the marketing control of airline seat allocations. The adaptive neural approach was amenable to rule expression. Additionally, the application's environment changed rapidly and constantly, which required a continuously adaptive solution. The system is used to monitor and recommend booking advice for each departure. Such information has a direct impact on the profitability of an airline and can provide a technological advantage for users of the system. [Hutchison & Stephens, 1987]
While it is significant that neural networks have been applied to this problem, it is also important to see that this intelligent technology can be integrated with expert systems and other approaches to make a functional system. Neural networks were used to discover the influence of undefined interactions among the various variables. While these interactions were not defined, they were used by the neural system to develop useful conclusions. It is also noteworthy that neural networks can influence the bottom line.