An Approach to Guaranteeing Generalisation in Neural Networks

Michael Kenneth Weir, J.G. Polhill

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)

Abstract

A novel approach to generalisation is presented that is able, under certain circumstances, to guarantee generalisation for binary-output data for which no targets have been given. The basis of the guarantee is the recognition of a persistent global minimum error solution. An empirical test for whether the guarantee holds is provided, using a technique called target reversal. The technique employs two neural networks whose convergence under opposing targets signals the validity of the guarantee.
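As an illustrative sketch only (not the paper's actual procedure), the target-reversal idea described in the abstract can be caricatured as follows: train two copies of a small feed-forward network on the same labelled data, but assign an unlabelled query point opposite provisional binary targets, then compare the final errors of the two networks. All names, hyperparameters, and the training setup below (`train_mlp`, `x_query`, the XOR-style data) are assumptions made for the sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, y, hidden=4, lr=1.0, epochs=2000, seed=0):
    """Train a one-hidden-layer MLP with squared-error loss via
    plain gradient descent; return the final mean squared error."""
    r = np.random.default_rng(seed)
    W1 = r.normal(0, 1, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = r.normal(0, 1, (hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)           # hidden-layer activations
        out = sigmoid(h @ W2 + b2)[:, 0]   # binary output in (0, 1)
        err = out - y
        # Backpropagation of the squared-error gradient
        d_out = err * out * (1 - out)
        d_h = (d_out[:, None] @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out[:, None] / len(y)
        b2 -= lr * d_out.mean(keepdims=True)
        W1 -= lr * X.T @ d_h / len(y)
        b1 -= lr * d_h.mean(axis=0)
    return float(np.mean(err ** 2))

# Labelled data plus one unlabelled query point (hypothetical example).
X = np.array([[0, 0], [0, 1], [1, 0]], dtype=float)
y = np.array([0, 1, 1], dtype=float)
x_query = np.array([[1, 1]], dtype=float)

# Two networks, with the query point given opposing provisional targets.
X_aug = np.vstack([X, x_query])
err_target_0 = train_mlp(X_aug, np.append(y, 0.0))  # query labelled 0
err_target_1 = train_mlp(X_aug, np.append(y, 1.0))  # query labelled 1

print(f"final error with query target 0: {err_target_0:.4f}")
print(f"final error with query target 1: {err_target_1:.4f}")
```

In this toy setting both labellings are realisable by the network, so both runs can reach low error; the paper's contribution is precisely the conditions under which the behaviour of the two opposing-target networks certifies the generalised label.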

Original language: English
Pages (from-to): 1035-1048
Number of pages: 14
Journal: Neural Networks
Volume: 14
Issue number: 8
Publication status: Published - Oct 2001

Keywords

  • feed-forward neural networks
  • binary outputs
  • guaranteed generalisation
  • global minimum errors
  • target reversal
  • exhaustive search
  • OUT CROSS-VALIDATION
  • FEEDFORWARD NETWORKS
  • COMPLEXITY
  • FRAMEWORK
  • BOUNDS
  • NETS
  • SET
