Improving Learning Performance in Neural Networks

AUTHORS

Falah Al-akashi, Faculty of Engineering, University of Kufa, Najaf, Iraq

ABSTRACT

In this paper, we propose an optimization framework for a robust deep learning algorithm that exploits the influence of recurring noise on artificial neural networks. The influences between nodes in the neural network remain very stable while converging towards a superior node, even in the presence of several types of noise or rogue inputs. Several characteristics of noisy data sources have been used to optimize the observations of a group of neural networks during their learning process. While a standard network learns to emulate those around it, it does not distinguish between professional and nonprofessional exemplars. A collective system can accomplish such difficult tasks in both static and dynamic environments without external control or central coordination. We show how the algorithm approximates gradient descent on the expected solutions produced by the nodes in the space of pheromone trails. Positive feedback helps individual nodes recognize and hone their skills and converge on their solutions optimally and rapidly. Our experimental results show how long-run disruption in the learning algorithm can successfully steer the process towards favorable outcomes. Our results are comparable to, and in some cases better than, those of models considered significant, e.g., the “large-step Markov chain” and other local search heuristic algorithms.


KEYWORDS

Reinforcement learning, Convergence analysis, Deep learning, Feature generation, Unsupervised learning, Robotics, Dynamic learning

ISSUE INFO

  • Volume 1, No. 2, 2021
  • ISSN (e): 2653-309X
  • Published: Dec. 2021