An Artificial Neural Network is a network of many very simple processors
("units"), each possibly having a small amount of local memory. The units are
connected by unidirectional communication channels ("connections"), which carry
numeric (as opposed to symbolic) data. The units operate only on their local data and on
the inputs they receive via the connections. There are many different types of Neural Networks, each with strengths particular to its applications. The abilities of different networks can be related to their structure, dynamics and learning methods. Neural Networks offer improved performance over conventional technologies in areas including: Machine Vision, Robust Pattern Detection, Signal Filtering, Virtual Reality, Data Segmentation, Data Compression, Data Mining, Text Mining, Artificial Life, Adaptive Control, Optimisation and Scheduling, Complex Mapping and more.

2.0 Applications

There are abundant materials, tutorials, references and disparate lists of demos on the net. This work attempts to compile a list of applications and demos, specifically those that come with video clips. The applications featured here are described in the sections that follow.
PS: For those who are only interested in source code for Neural Networks, see the resources listed at the end of this article.
2.1 CoEvolution of Neural Networks for Control of Pursuit & Evasion
Dave Cliff & Geoffrey Miller, University of Sussex
The following MPEG movie sequences
illustrate behaviour generated by dynamical recurrent neural network controllers
co-evolved for pursuit and evasion capabilities. From an initial
population of random network designs, successful designs in each generation are selected
for reproduction with recombination, mutation, and gene duplication. Selection is based on
measures of how well each controller performs in a number of pursuit-evasion contests. In
each contest a pursuer controller and an evader controller are pitted against each other,
controlling simple "visually guided" 2-dimensional autonomous virtual agents. Both the
pursuer and the evader have limited amounts of energy, which is used up in movement, so
they have to evolve to move economically. Each contest results in a time-series of
position and orientation data for the two agents. These time-series are then fed into a custom 3-D movie generator. It is important to note that, although the chase behaviours are genuine data, the 3-D structures, surface physics, and shading are all purely for illustrative effect.
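To make the generational loop concrete, here is a minimal Python sketch of evolving one population of controller genomes by selection, recombination and mutation. Everything in it (population size, mutation rate, genome encoding, and the score_contest placeholder) is invented for illustration and is not taken from the original experiments.

import random

POP_SIZE, GENOME_LEN, MUT_RATE = 30, 64, 0.05

def random_genome():
    # A genome is just a flat list of connection weights.
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def mutate(genome):
    # Perturb each gene with small probability.
    return [w + random.gauss(0, 0.1) if random.random() < MUT_RATE else w
            for w in genome]

def crossover(a, b):
    # One-point recombination.
    cut = random.randrange(GENOME_LEN)
    return a[:cut] + b[cut:]

def score_contest(pursuer, evader):
    # Placeholder: run one pursuit-evasion contest, return the pursuer's fitness.
    return random.random()

pursuers = [random_genome() for _ in range(POP_SIZE)]
evaders = [random_genome() for _ in range(POP_SIZE)]

for generation in range(100):
    # Each pursuer meets several randomly chosen evaders; fitness is the mean score.
    fitness = [sum(score_contest(p, random.choice(evaders)) for _ in range(5)) / 5
               for p in pursuers]
    # Rank-based selection: the better half reproduces with crossover and mutation.
    ranked = [p for _, p in sorted(zip(fitness, pursuers), key=lambda t: -t[0])]
    parents = ranked[:POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    pursuers = parents + children
    # (The evader population would be evolved symmetrically, scoring 1 - contest score.)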
2.2 Learning the Distribution of Object Trajectories for Event Recognition
Neil Johnson & David Hogg, University of Leeds
This research concerns the modelling
of object behaviours using detailed, learnt statistical models.
The techniques being developed will allow models of characteristic object
behaviours to be learnt from the continuous observation of long image sequences.
It is hoped that these models of characteristic behaviours will have a number of uses,
particularly in automated surveillance and event recognition,
allowing the surveillance problem to be approached from a lower level, without the need
for high-level scene/behavioural knowledge. Other possible uses include the random
generation of realistic-looking object behaviour for use in Virtual Reality (see Radiosity
for Virtual Reality Systems in Section 2.3), and long-term prediction of object
behaviours to aid occlusion reasoning in object tracking.
1. The model is learnt in an unsupervised manner by tracking objects over long image sequences, and is based on a combination of a neural network implementing Vector Quantisation and a type of neuron with short-term memory capabilities (a sketch of the vector quantisation step follows this list).
2. Models of the trajectories of pedestrians have been generated and used to assess the typicality of new trajectories (allowing the identification of 'incidents of interest' within the scene), predict future object trajectories, and randomly generate new trajectories.
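As a rough illustration of the vector quantisation component, the following competitive-learning loop moves the winning codebook vector towards each input it wins; the synthetic data, codebook size and learning-rate schedule are all invented for this sketch.

import numpy as np

rng = np.random.default_rng(0)
points = rng.random((1000, 2))   # stand-in for tracked (x, y) object positions
codebook = rng.random((16, 2))   # 16 prototype vectors

lr = 0.1
for epoch in range(20):
    for x in points:
        winner = np.argmin(((codebook - x) ** 2).sum(axis=1))  # nearest prototype
        codebook[winner] += lr * (x - codebook[winner])        # move it towards x
    lr *= 0.9  # decay the learning rate

# A trajectory can now be represented as the sequence of winning prototype indices,
# which is the kind of discrete stream a short-term-memory neuron layer can model.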
2.3 Radiosity for Virtual Reality Systems (ROVER)
Tralvex Yeap & Graham Birtwistle, University of Leeds
The synthesis of photo-realistic computer-generated images has been the aim of artists and graphic
designers for many decades. Some of the most realistic images (see
Graphics Gallery - simulated
steel mill) were generated using radiosity techniques. Unlike ray tracing, radiosity
models the actual interaction between the lights and the environment. In photo-realistic
Virtual Reality (VR) environments, the need for quick feedback based on user
actions is crucial. It is generally recognised that traditional implementations of
radiosity are computationally very expensive and therefore not feasible for use in VR
systems, where practical data sets are of huge complexity. The original
thesis introduces two new methods and several hybrid techniques to the radiosity
research community for using radiosity in VR applications. In the left column, flyby, walkthrough and a virtual space are first introduced. On the right, we showcase one of the two novel methods proposed, which uses Neural Network technology (download the PDF thesis, 123 pages / 2.6 MB).
Introduction to Flyby, Walkthrough and Virtual Space
(A) ROVER Learning from Examples: Sequence 1 (935 KB, QuickTime), Sequence 5 (2 MB, QuickTime), Sequence 8 (2.2 MB, QuickTime)
(B) ROVER Modelling
(C) ROVER Prediction
2.4 Autonomous Walker & Swimming Eel
Anders Lansner, Studies of Artificial Neural Systems
(A) The research in this area involves combining biology,
mechanical engineering and information technology in
order to develop the techniques necessary to build a dynamically stable legged
vehicle controlled by a neural network. This would incorporate command signals,
sensory feedback and reflex circuitry in order to produce the desired movement.
(B) Simulation of the swimming lamprey (an eel-like sea creature), driven by a neural network; a sketch of the oscillator idea behind such controllers follows.
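A common abstraction of the lamprey's swimming controller is a chain of coupled segmental oscillators whose fixed phase lag produces a travelling wave of body curvature. The Python sketch below uses simple phase-coupled oscillators; all constants are invented, and it illustrates the general idea rather than the actual model used in this work.

import math

N_SEGMENTS, DT = 10, 0.01
freq = 2.0                        # intrinsic oscillation frequency (Hz)
couple = 4.0                      # coupling strength between neighbours
lag = 2 * math.pi / N_SEGMENTS    # desired phase lag per segment

phase = [0.0] * N_SEGMENTS
for step in range(1000):
    new_phase = []
    for i in range(N_SEGMENTS):
        dphi = 2 * math.pi * freq
        if i > 0:                 # pull towards a fixed lag behind the front neighbour
            dphi += couple * math.sin(phase[i - 1] - phase[i] - lag)
        if i < N_SEGMENTS - 1:
            dphi += couple * math.sin(phase[i + 1] - phase[i] + lag)
        new_phase.append(phase[i] + DT * dphi)
    phase = new_phase

curvature = [math.sin(p) for p in phase]  # drives each segment's muscle pair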
2.5 Robocup: Robot World Cup
Neo Say Poh & Tralvex Yeap, Japan-Singapore AI Centre / Kent Ridge Digital Labs
The RoboCup Competition pits robots (real and virtual)
against each other in a simulated soccer tournament. The aim of the RoboCup competition is
to foster an interdisciplinary approach to robotics and agent-based AI by presenting a
domain that requires large-scale cooperation and coordination in a dynamic,
noisy, complex environment. RoboCup has three different leagues to date. The Small and Middle-Size Leagues involve physical robots; the Simulation League is for virtual, synthetic teams. This work focuses on building softbots for the Simulation League. Common AI methods used for machine learning in RoboCup are variants of Neural Networks and Genetic Algorithms.
2.6 Using HMMs for Audio-to-Visual Conversion
Ram Rao, Russell Mersereau & Chen Tsuhan, Georgia Institute of Technology / AT&T Research
One emerging application which exploits the
correlation between audio and video is speech-driven facial animation.
The goal of speech-driven facial animation is to synthesize realistic video
sequences from acoustic speech. Much of the previous research has implemented
this audio-to-visual conversion strategy with existing techniques such as vector
quantization and neural networks. Here, they examine how this conversion
process can be accomplished with hidden Markov models (HMM).
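As a minimal illustration of HMM decoding, the following Viterbi sketch in Python maps a sequence of quantised acoustic symbols to the most likely sequence of mouth-shape states; the states, probabilities and observations are all invented for the example.

import numpy as np

states = ["closed", "half-open", "open"]   # hypothetical visual (mouth) states
A = np.array([[0.7, 0.2, 0.1],             # state transition probabilities
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
B = np.array([[0.8, 0.1, 0.1],             # P(acoustic symbol | state)
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])
pi = np.array([0.6, 0.3, 0.1])             # initial state distribution
obs = [0, 1, 2, 2, 1]                      # quantised acoustic frames

# Viterbi recursion in log space.
delta = np.log(pi) + np.log(B[:, obs[0]])
backpointers = []
for o in obs[1:]:
    scores = delta[:, None] + np.log(A)    # scores[i, j]: best path ending i -> j
    backpointers.append(scores.argmax(axis=0))
    delta = scores.max(axis=0) + np.log(B[:, o])

path = [int(delta.argmax())]               # backtrack from the best final state
for bp in reversed(backpointers):
    path.append(int(bp[path[-1]]))
path.reverse()
print([states[s] for s in path])           # most likely mouth-shape sequence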
(A) Tracking Demo: The parabolic contour is fit to each
frame of the video sequence using a modified deformable template algorithm.
The height between the two contours, and the width between the corners of the mouth, can be
extracted from the templates to form our visual parameter sets.
(B) Morphing Demo: Another important piece of the speech-driven facial animation system is a visual synthesis module. Here we are attempting to synthesize the word "wow" from a single image. Each frame in the video sequence is morphed from the first frame shown below. The parameters used to morph these images were obtained by hand.
2.7 Galapagos
Anark
Galapagos is a fantastic and dangerous
place where up and down have no meaning, where rivers of iridescent acid and high-energy
laser mines are beautiful but deadly artifacts of some other time. Through spatially
twisted puzzles and bewildering cyber-landscapes, the artificial creature called
Mendel struggles to survive, and you must help him. Mendel is a synthetic organism that can sense infrared radiation and tactile stimuli. His mind is an advanced adaptive controller featuring Non-stationary Entropic Reduction Mapping, a new form of artificial life technology developed by Anark. He can learn like your dog, he can adapt to hostile environments like a cockroach, but he can't solve the puzzles that prevent his escape from Galapagos.
Galapagos features rich, 3D
texture-mapped worlds, with continuous-motion graphics and 6 degrees of freedom. Dramatic
camera movement and incredible lighting effects make your passage through Galapagos
breathtaking. Explosions and other chilling effects will make you fear for your synthetic
friend. Active panning 3D stereo sound will draw you into the exotic worlds of Galapagos.
2.8 Speechreading (Lipreading)
Günter Mamier, Marco Sommerau & Michael Vogt, Universität Stuttgart
As part of the research programme
Neuroinformatik, the IPVR is developing a neural speechreading system as part
of a user interface for a workstation. The three main parts of the system are a face
tracker (done by Marco Sommerau), lip modelling and speech processing (done by Michael
Vogt), and the development and application of SNNS for neural network training (done by
Günter Mamier). Automatic speechreading is based on a robust lip image
analysis. In this approach, no special illumination or lip make-up is used. The analysis
is based on true-colour video images. The system allows for real-time tracking and storage
of the lip region and robust off-line lip model matching. The proposed model is based on
cubic outline curves. A neural classifier detects the visibility of teeth edges and
other attributes. At this stage of the approach, the edge between the
closed lips is automatically modelled if applicable, based on a neural network's decision.
2.9 Detection and Tracking of Moving Targets
Defense Group Incorporated
The moving target detection and track methods here are
"track before detect" methods. They correlate sensor data versus time and
location, based on the nature of actual tracks. The track statistics are
"learned" based on artificial neural network (ANN) training with prior real or
simulated data. Effects of different clutter backgrounds are partially
compensated based on space-time-adaptive processing of the sensor inputs,
and further compensated based on the ANN training. Specific processing structures are
adapted to the target track statistics and sensor characteristics of interest. Fusion of
data over multiple wavelengths and sensors is also supported.

Compared to conventional fixed matched filter techniques, these methods have been shown to reduce false alarm rates by up to a factor of 1000 based on simulated SBIRS data for very weak ICBM targets against cloud and nuclear backgrounds, with photon, quantization, and thermal noise, and sensor jitter included. Examples of the backgrounds, and processing results, are given below.

The methods are designed to overcome the weaknesses of other advanced track-before-detect methods, such as 3+-D (space, time, etc.) matched filtering, dynamic programming (DP), and multi-hypothesis tracking (MHT). Loosely speaking, 3+-D matched filtering requires too many filters in practice for long-term track correlation; DP cannot realistically exploit the non-Markovian nature of real tracks, and strong targets mask out weak targets; and MHT cannot support the low pre-detection thresholds required for very weak targets in high clutter. They have developed and tested versions of the above (and other) methods in their research, as well as Kalman-filter probabilistic data association (KF/PDA) methods, which they use for post-detection tracking.

Space-time-adaptive methods are used to deal with correlated, non-stationary, non-Gaussian clutter, followed by a multi-stage filter sequence and soft-thresholding units that combine current and prior sensor data, plus feedback of prior outputs, to estimate the probability of target presence. The details are optimized by adaptive "training" over very large data sets, and special methods are used to maximize the efficiency of this training.
2.10 Real-time Target Identification for Security Applications
Stephen McKenna, Queen Mary & Westfield College
The system localises and tracks people's faces as they move through a scene. It integrates several techniques; in particular, faces are tracked robustly by integrating motion-based and model-based tracking.
(A) Tracking in low resolution and poor lighting conditions

(B) Tracking two people simultaneously: lock is maintained on the faces despite unreliable motion-based body tracking.
2.11 Facial Animation
David Forsey, University of British Columbia
Facial animations were created using hierarchical B-splines as the underlying surface representation. Neural networks could be used to learn the variations in facial expressions for animated sequences. The (mask) model was created in SoftImage, and is an early prototype for the character "Mouse" in the YTV/ABC television series "ReBoot" (they do not use hierarchical splines for ReBoot!). The original standard bicubic B-spline was imported into the "Dragon" editor and a hierarchy automatically constructed. The surface was attached to a jaw to allow it to open and close the mouth. Groups of control vertices were then moved around to create various facial expressions. Three of these expressions were chosen as key shapes, the spline surface was exported back to SoftImage, and the key shapes were interpolated to create the final animation.
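As a minimal sketch of the key-shape interpolation step described above, each output frame below is a weighted blend of a few key control-vertex configurations; the shapes and weights are invented stand-ins, not the actual "Mouse" data.

import numpy as np

neutral = np.zeros((100, 3))                      # 100 control vertices at rest
smile = neutral + np.array([0.0, 0.1, 0.0])       # invented key shapes
frown = neutral + np.array([0.0, -0.1, 0.0])
keys = np.stack([neutral, smile, frown])

def blend(weights):
    # Linear blend of key shapes; weights should sum to 1.
    w = np.asarray(weights).reshape(-1, 1, 1)
    return (w * keys).sum(axis=0)

# Animate from neutral to smile over 30 frames.
frames = [blend([1 - t, t, 0.0]) for t in np.linspace(0.0, 1.0, 30)]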
2.12 Behavioral Animation and Evolution of Behavior
Craig Reynolds, Silicon Graphics
This is a classic experiment (showcased at SIGGRAPH 1995): the flocking of "boids", which convincingly bridged the gap between artificial life and computer animation. Each boid has direct access to the whole scene's geometric description, but reacts only to flockmates within a certain small radius of itself. The basic flocking model consists of three simple steering behaviors (a code sketch follows below):
- Separation: steer to avoid crowding local flockmates.
- Alignment: steer towards the average heading of local flockmates.
- Cohesion: steer towards the average position of local flockmates.
In addition, the more elaborate behavioral model included predictive obstacle avoidance and goal seeking. Obstacle avoidance allowed the boids to fly through simulated environments while dodging static objects. For applications in computer animation, a low priority goal seeking behavior caused the flock to follow a scripted path.
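Here is a minimal Python sketch of those three steering rules, assuming each boid reacts only to neighbours within a fixed radius; the weights, radius and time step are illustrative choices, not Reynolds' original values.

import numpy as np

rng = np.random.default_rng(2)
N, RADIUS, DT = 50, 5.0, 0.1
pos = rng.uniform(0, 50, (N, 2))
vel = rng.normal(0, 1, (N, 2))

for step in range(200):
    new_vel = vel.copy()
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbr = (d > 0) & (d < RADIUS)                    # flockmates in the local radius
        if not nbr.any():
            continue
        separation = (pos[i] - pos[nbr]).mean(axis=0)   # steer away from crowding
        alignment = vel[nbr].mean(axis=0) - vel[i]      # match neighbours' heading
        cohesion = pos[nbr].mean(axis=0) - pos[i]       # steer to the local centre
        new_vel[i] += 0.15 * separation + 0.05 * alignment + 0.01 * cohesion
    vel = new_vel
    pos += DT * vel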
2.13 A Three-Layer Feedforward Neural Network
Feng Yutao, Duke University
A three-layer feedforward neural network with two input nodes and one output node is trained with backpropagation using some sample points inside a circle in the 2D plane. The evolution of the decision regions formed during training is shown in the following MPEG movies.
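A minimal Python sketch of this experiment: a two-input, one-hidden-layer, one-output network trained with backpropagation to classify points as inside or outside a circle. The hidden-layer size, learning rate and epoch count are invented; the original network's settings are not given.

import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (500, 2))
y = ((X ** 2).sum(axis=1) < 0.5).astype(float).reshape(-1, 1)  # 1 if inside circle

H, lr = 8, 0.5
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    h = sigmoid(X @ W1 + b1)                       # forward pass
    out = sigmoid(h @ W2 + b2)
    grad_out = (out - y) * out * (1 - out)         # backprop: output layer (MSE loss)
    grad_h = (grad_out @ W2.T) * h * (1 - h)       # backprop: hidden layer
    W2 -= lr * h.T @ grad_out / len(X); b2 -= lr * grad_out.mean(axis=0)
    W1 -= lr * X.T @ grad_h / len(X); b1 -= lr * grad_h.mean(axis=0)

print("training accuracy:", ((out > 0.5) == y).mean())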
2.14 Artificial Life for Graphics, Animation, Multimedia, and Virtual Reality: Siggraph '95 Showcase
University of Toronto
Some graphics researchers have begun to explore a new frontier: a world of objects of enormously greater complexity than is typically accessible through physical modeling alone, namely objects that are alive. The modeling and simulation of living systems for computer graphics resonates with the burgeoning field of scientific inquiry called Artificial Life. Conceptually, artificial life transcends the traditional boundaries of computer science and biological science. The natural synergy between computer graphics and artificial life can be potentially beneficial to both disciplines. As some of the demos here demonstrate, potential is becoming fulfillment.

The demos demonstrate and elucidate new models that realistically emulate a broad variety of living things, both plants and animals, from lower animals all the way up the evolutionary ladder to humans. Typically, these models inhabit virtual worlds in which they are subject to physical laws. Consequently, they often make use of physics-based modeling techniques. More significantly, however, they must also simulate many of the natural processes that uniquely characterize living systems, such as birth and death, growth, natural selection, evolution, perception, locomotion, manipulation, adaptive behavior, intelligence, and learning. The challenge is to develop sophisticated graphics models that are self-creating, self-evolving, self-controlling, and/or self-animating by simulating the natural mechanisms fundamental to life.
Evolved Virtual Creatures (10 MB, MPEG)
Sensor-Based Autonomous Creatures (14 MB, QuickTime)
A.Fish (15 MB, QuickTime)
2.15 Creatures
CyberLife Technology
Creatures is the most entertaining computer game you'll ever play that offers nothing to shoot, no puzzles to solve and no difficult controls to master. And yet it is mesmerising entertainment. You have to raise, teach, breed and love computer pets that are really alive: so alive that, if they are not taken care of, they will die. Creatures features the most advanced, genuine Artificial Life software ever developed in a commercial product, technology that has captured the imaginations of scientists world-wide. This is a look into the future, where new species of life emerge from ordinary home and office PCs.
2.16 Framsticks Artificial Life
Maciej Komosinski and Szymon Ulatowski (FAL Website)
Framsticks is a three-dimensional life simulation project. Both the physical structure of creatures and their control systems are evolved. Evolutionary algorithms are used with selection, crossovers and mutations. The finite element method is used for simulation. Both spontaneous and directed evolution are possible. In the demo, "Antelope" attacks "Spider"; the broken "Spider" becomes an energy/food source. On the right, an MPG movie (778 KB, 450 frames at 320x240, 18 seconds); a high-quality MPG (1.71 MB) is also available.
3.0 Conclusion
3.1 Past and Present
The development of true Neural Networks is a fairly recent event, and one which has been met with success. Two of the many systems that have been developed are the basic feedforward network and the Hopfield net; a sketch of the latter follows.
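As a quick illustration of the second of these, here is a minimal Hopfield net sketch in Python: patterns are stored with the Hebbian outer-product rule and recalled by iterating the threshold update from a noisy starting state. The patterns are arbitrary examples.

import numpy as np

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
n = patterns.shape[1]
W = sum(np.outer(p, p) for p in patterns) / n   # Hebbian outer-product storage
np.fill_diagonal(W, 0)                          # no self-connections

state = np.array([1, -1, 1, -1, -1, -1])        # pattern 0 with one bit flipped
for _ in range(10):                             # asynchronous update sweeps
    for i in range(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(state)                                    # settles on the nearest stored pattern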
In addition to the applications featured here, other application areas include:
- Financial Analysis -- stock predictions.
- Signature Analysis -- banks in America have taken to NNs to compare signatures against those stored on file.
- Process Control Oversight -- NNs are used to advise aircraft pilots of engine problems.
- Direct Marketing -- NNs can monitor results from a test mailing and determine the most successful areas.
- Pen PCs.
3.2 The Future
The future of Neural Networks is wide open, and may lead to many answers and many new questions. Is it possible to create a conscious machine? What rights would such computers have? How does the human mind work? What does it mean to be human?
Project links (in section order):
2.1 http://www.cogs.susx.ac.uk/users/davec/pe.html
2.2 http://www.scs.leeds.ac.uk/neilj/research.html
2.3 http://tralvex.com/rover
2.4 http://best.nada.kth.se:8080/sans/jml.cgi/res_demo.jml
2.5 http://tralvex.com/robodemo
2.6 http://www.ece.gatech.edu/users/rr/papers/mmsp/paper.html
2.7 http://www.anark.com/Galapagos/info.shtml
2.8 http://www.informatik.uni-stuttgart.de/ipvr/bv/projekte/Neuroinformatik/model.html
2.9 http://www.ca.defgrp.com/detect.html
2.10 http://www.dcs.qmw.ac.uk/research/vision/track_people_face.html
2.11 http://www.cs.ubc.ca/nest/imager/contributions/forsey/dragon/anim.html
2.12 http://hmt.com/cwr/boids.html
2.13 http://www.ee.duke.edu/~yf/bptr.html
2.14 http://www.cs.utoronto.ca/~dt/siggraph96-course/
2.15 http://creatures.wikia.com/wiki/Cyberlife
2.16 http://www.frams.poznan.pl/
Other Neural Network resources:
- Introduction to Neural Networks (PDF lecture notes) - University of Minnesota
- The Data Analysis BriefBook
- StatSoft Statistics & Neural Networks (including a 4.7 MB e-textbook), STATISTICA Neural Networks
- Yahoo NN
- Statistics Resource Portal
- HTML Text Book
- HyperStat Online: Software
- NN Using GA
- Neural Computing Applications Forum
- NN Around the World
- A tutorial on HMMs
- Hidden Markov Models for Interactive Learning of Hand Gestures
- Tutorial on HMMs (LU)
- CMU AI: Neural Networks
- NN @ Yahoo
- Imagination Engines Papers
- AI Bibliographies, NN Bibliography, COGANN (GA+NN)
Created on 3 Mar 1998. Last revised on 22 Dec 2006.
Tralvex Yeap