Abstract: | Inductive inference learning can be described in terms of finding a good approximation to some unknown classification rule f, based on a pre-classified set of training examples $\langle x, f(x)\rangle$. One particular class of learning systems that has attracted much attention recently is the class of neural networks. Despite the excitement generated by neural networks, however, learning in these systems has proven to be a difficult task. In this thesis, we investigate several approaches to overcoming the difficulty of training feedforward neural networks. Our goal is to devise efficient learning algorithms for new classes (or architectures) of neural nets. In the first approach, we relax the constraint of a fixed architecture adopted by most neural learning algorithms, and we describe two constructive learning algorithms, one for two-layer networks and one for tree-like networks. In the second approach, we adopt the "probably approximately correct" (PAC) learning model and look for positive learnability results by restricting the distribution generating the training examples, the connectivity of the networks, and/or the weight values. This enables us to identify new classes of neural networks that are efficiently learnable in the chosen setting. In the third and final approach, we study learning in neural networks from the average-case point of view. In particular, we investigate the average-case behavior of the well-known clipped Hebb rule when learning different neural networks with binary weights. The arguments given for efficient learnability range from extensive simulations to rigorous mathematical proofs. |
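For concreteness, a minimal sketch of the clipped Hebb rule referred to above, assuming its standard single-unit form for binary weights $w_i \in \{-1,+1\}$, inputs $x^\mu \in \{-1,+1\}^n$, and $m$ labeled examples $(x^\mu, f(x^\mu))$; the exact variants analyzed in the thesis may differ. Each weight is set to the sign of the empirical correlation between the corresponding input component and the target label:

$$
w_i \;=\; \operatorname{sign}\!\left(\sum_{\mu=1}^{m} f(x^\mu)\, x_i^\mu\right), \qquad i = 1, \dots, n .
$$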