Current developments, challenges and future directions in Adaptive Resonance Theory
Adaptive Resonance Theory (ART) has been the foundation upon which an entire family of neuro-fuzzy learning machines, the family of ART architectures, has been developed over the past decades. Since the late 1970s, when Stephen Grossberg addressed the stability-plasticity dilemma via ART, research in this area has been very fertile, producing a powerful variety of function approximators and pattern recognizers. Lately, several ART-based architectures have been proposed and developed in an effort to overcome the shortcomings of their predecessors. All current approaches aim to improve generalization performance and reduce the network's representation size. One route followed in the design of ART classifiers is the distributed encoding of the data and the subsequent distributed prediction of class membership. Another approach sacrifices the incremental learning characteristics of ART and pursues an improved representation of the training data via off-line learning. Finally,
ART classifier models based on semi-supervised learning employ a tolerance towards misclassifications to achieve improvements, while maintaining an on-line learning capability. The presentation's goal is to provide the attendees with an overview of ART elements, insight into current developments, and a list of open challenges in the field.
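To make the stability-plasticity mechanism concrete, the following is a minimal sketch of a Fuzzy ART-style learner, one common member of the ART family: complement-coded inputs, a choice function to rank categories, a vigilance test that either triggers resonance (update an existing category) or recruits a new one. The class name, parameter defaults, and single-pass training loop are illustrative choices for this sketch, not details taken from the talk.

```python
import numpy as np

def complement_code(x):
    # Fuzzy ART complement coding: represent x in [0,1]^M as [x, 1 - x]
    return np.concatenate([x, 1.0 - x])

class FuzzyART:
    def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
        self.rho = rho        # vigilance: higher -> finer categories
        self.alpha = alpha    # choice parameter (small positive constant)
        self.beta = beta      # learning rate (1.0 = fast learning)
        self.w = []           # one weight vector per committed category

    def train(self, x):
        i = complement_code(np.asarray(x, dtype=float))
        # rank committed categories by the choice function |i ^ w_j| / (alpha + |w_j|)
        order = sorted(range(len(self.w)),
                       key=lambda j: -np.minimum(i, self.w[j]).sum()
                                     / (self.alpha + self.w[j].sum()))
        for j in order:
            # vigilance test: match ratio against the input norm
            match = np.minimum(i, self.w[j]).sum() / i.sum()
            if match >= self.rho:
                # resonance: move the category prototype toward i ^ w_j
                self.w[j] = (self.beta * np.minimum(i, self.w[j])
                             + (1 - self.beta) * self.w[j])
                return j
        # mismatch everywhere: commit a new category (plasticity)
        self.w.append(i.copy())
        return len(self.w) - 1
```

With a high vigilance, nearby inputs resonate with the same category while distant ones recruit new categories; lowering rho coarsens the partition without erasing already-learned prototypes, which is the stability side of the trade-off.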
We will describe how the recurrent structure of Random Neural Networks, and their approximation and convergence properties, can be exploited to adaptively control large systems such as packet networks on the one hand, and to model intricate texture patterns on the other. The talk will summarize the underlying theory and present working systems based on these principles.
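The convergence property alluded to above can be illustrated with the standard steady-state equations of the Random Neural Network: each neuron's excitation probability q_i satisfies q_i = lambda+_i / (r_i + lambda-_i), where the arrival rates lambda+/- combine external traffic with spikes from other neurons, and the coupled system is typically solved by fixed-point iteration. The function below is a small sketch under that formulation; the variable names and the example network are illustrative, and it does not enforce the usual consistency condition between firing rates and outgoing weights.

```python
import numpy as np

def rnn_steady_state(Lam, lam, r, Wp, Wn, iters=200):
    """Fixed-point iteration for RNN steady-state excitation probabilities.

    Lam, lam : external excitatory / inhibitory arrival rates per neuron
    r        : firing rates
    Wp, Wn   : Wp[j, i] (resp. Wn[j, i]) is the excitatory (inhibitory)
               spike rate from neuron j to neuron i (rates folded in)
    """
    q = np.zeros(len(r))
    for _ in range(iters):
        lp = Lam + q @ Wp          # total excitatory arrivals at each neuron
        ln = lam + q @ Wn          # total inhibitory arrivals
        q = np.minimum(1.0, lp / (r + ln))
    return q
```

For a feed-forward pair of neurons the iteration settles in a couple of steps; in recurrent topologies the same loop converges under the contraction conditions the talk refers to.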