Improving Learning Performance in Neural Networks

AUTHORS

Falah Al-akashi, Faculty of Engineering, University of Kufa, Najaf, Iraq

ABSTRACT

In this paper, we propose an optimization framework for a robust deep learning algorithm that exploits the influence of noisy recurrence on an artificial neural network. The influences between nodes in the network remain stable as they converge towards a superior node, even in the presence of several types of noise or rogue inputs. Several characteristics of noisy data sources are used to optimize the observations of a group of neural networks during their learning process. While a standard network learns to emulate the networks around it, it does not distinguish between professional and nonprofessional exemplars. The collective system is able to address such difficult tasks in both static and dynamic environments without external control or central coordination. We show how the algorithm approximates gradient descent on the expected quality of the solutions produced by the nodes in the space of pheromone trails. Positive feedback helps individual nodes recognize and hone their skills and converge on their solutions optimally and rapidly. Our experimental results show how long-run disruption in the learning algorithm can still move the process towards favorable outcomes. Our results are comparable to, and better than, those of other models considered significant, e.g., the "large-step Markov chain" and other local search heuristic algorithms.
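To make the mechanism described above concrete, the following minimal Python sketch illustrates a generic pheromone-trail update with evaporation and positive feedback of the kind the abstract alludes to. All names, parameters, and the toy tour-construction task are illustrative assumptions, not the paper's actual algorithm.

    import numpy as np

    # Illustrative sketch only: a generic pheromone-trail update with positive
    # feedback and evaporation. Names and parameters (evaporation_rate, n_ants,
    # etc.) are hypothetical and not taken from the paper.

    rng = np.random.default_rng(0)

    def sample_path(pheromone):
        """Sample one candidate solution: at each step pick the next node
        with probability proportional to its pheromone level."""
        n = pheromone.shape[0]
        path = [0]
        unvisited = set(range(1, n))
        while unvisited:
            current = path[-1]
            choices = np.array(sorted(unvisited))
            weights = pheromone[current, choices]
            probs = weights / weights.sum()
            nxt = int(rng.choice(choices, p=probs))
            path.append(nxt)
            unvisited.remove(nxt)
        return path

    def update(pheromone, paths, costs, evaporation_rate=0.1):
        """Evaporate all trails, then reinforce edges used by good solutions
        (positive feedback); in expectation this moves the trails in the
        direction of better expected cost, a gradient-descent-like step."""
        pheromone *= (1.0 - evaporation_rate)
        for path, cost in zip(paths, costs):
            for a, b in zip(path, path[1:]):
                pheromone[a, b] += 1.0 / cost
        return pheromone

    # Toy usage on a random symmetric cost matrix.
    n_nodes, n_ants = 6, 10
    dist = rng.uniform(1.0, 10.0, size=(n_nodes, n_nodes))
    dist = (dist + dist.T) / 2.0
    pheromone = np.ones((n_nodes, n_nodes))

    for _ in range(50):
        paths = [sample_path(pheromone) for _ in range(n_ants)]
        costs = [sum(dist[a, b] for a, b in zip(p, p[1:])) for p in paths]
        pheromone = update(pheromone, paths, costs)

    print("best sampled cost:", min(costs))

The positive-feedback term (depositing pheromone in proportion to solution quality) is what lets individual nodes reinforce the choices that produced good solutions, while evaporation keeps the system responsive to change in dynamic environments.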


KEYWORDS

Reinforcement learning, Convergence analysis, Deep learning, Features generation, Unsupervised learning, Robotics, Dynamic learning

REFERENCES

[1]     D. Ackley and M. Littman, “Generalization and scaling in reinforcement learning,” in D. S. Touretzky (Ed.), Advances in Neural Information Processing Systems 2, San Mateo, CA: Morgan Kaufmann, pp.550-557, (1990)
[2]     L. Jing and Y. Tian, “Self-supervised visual feature learning with deep neural networks: A survey,” arXiv:1902.06162, (2019)
[3]     Z. Ghahramani, “Unsupervised learning,” in O. Bousquet, U. von Luxburg, and G. Rätsch (Eds.), Advanced Lectures on Machine Learning, Lecture Notes in Computer Science, vol.3176, Springer, Berlin, Heidelberg, (2004), DOI: 10.1007/978-3-540-28650-9_5
[4]     D. Helmbold and R. Schapire, “Predicting nearly as well as the best pruning of a decision tree,” Machine Learning, vol.27, pp.51-68, (1997), DOI: 10.1023/A:1007396710653
[5]     J. Kennedy and R. Eberhart, “Swarm intelligence,” Morgan Kaufmann Publishers, San Francisco, CA, United States, ISBN: 978-1-55860-595-4, (2001)
[6]     K. Li and J. Malik, “Learning to optimize,” arXiv:1606.01885, (2016)
[7]     K. Li and J. Malik, “Learning to optimize neural nets,” arXiv:1703.00441v2, (2017)
[8]     H. Yao, W. Xian, Z. Tao, Y. Li, B. Ding, R. Li, and Z. Li, “Automated relational meta-learning,” in Proceedings of ICLR 2020 Conference, (2020)
[9]     T. Hospedales, A. Antoniou, P. Micaelli, and A. Storkey, “Meta-learning in neural networks: A survey,” arXiv:2004.05439, (2020)
[10]  M. Andrychowicz, M. Denil, S. Gomez, M. Hoffman, D. Pfau, T. Schaul, B. Shillingford, and N. de Freitas, “Learning to learn by gradient descent by gradient descent,” arXiv preprint arXiv:1606.04474, (2016)
[11]  K. Li and J. Malik, “Learning to optimize,” CoRR, abs/1606.01885, (2016)
[12]  D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, (2014)
[13]  J. Duchi, E. Hazan, and Y. Singer, “Adaptive sub-gradient methods for online learning and stochastic optimization,” Journal of Machine Learning Research, vol.12(Jul) pp.2121-2159, (2011)
[14]  T. Tieleman and G. Hinton, “Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude,” COURSERA: Neural Networks for Machine Learning, vol.4, no.2, (2012)
[15]  E. Bonabeau, M. Dorigo, and G. Theraulaz, “Swarm intelligence: From natural to artificial systems,” New York: Oxford University Press, (1999)
[16]  R. Caruana, D. Silver, J. Baxter, T. Mitchell, L. Pratt, and S. Thrun. “Learning to learn: Knowledge consolidation and transfer in inductive systems,” NIPS workshop on Learning to Learn: Knowledge Consolidation and Transfer in Inductive Systems, (1995)
[17]  S. Thrun and L. Pratt, “Learning to learn,” Springer Science & Business Media, (2012)
[18]  P. Brazdil, G. Carrier, C. Soares, and R. Vilalta, “Meta learning: Applications to data mining,” Springer Science and Business Media, Springer Verlag, Berlin Heidelberg, pp.17-42, (2008)
[19]  C. Blum and A. Roli, “Meta heuristics in combinatorial optimization: Overview and conceptual comparison,” ACM Computing Surveys (CSUR), vol.35, no.3, pp.268-308, (2003)
[20]  J. Schmidhuber, “Optimal ordered problem solver,” Machine Learning, vol.54, no.3, pp.211-254, (2004)
[21]  S. Hochreiter, S. Younger, and P. Conwell, “Learning to learn using gradient descent,” in International Conference on Artificial Neural Networks, pp.87-94, Springer, (2001)
[22]  K. Kishor and P. Suresh, “GCS technique to improve the performance of neural networks,” Journal of Intelligent Systems, vol.29, no.1, DOI: 10.1515/jisys-2017-0423, (2019)
[23]  T. Desell, S. Clachar, J. Higgins, and B. Wild, “Evolving deep recurrent neural networks using ant colony optimization,” 15th European Conference on Evolutionary Computation in Combinatorial Optimization, pp.235, (2015), DOI: 10.1007/978-3-319-16468-7_8
[24]  H. Botee and E. Bonabeau, “Evolving ant colony optimization,” Advances in Complex Systems, vol.1, no.2/3, pp.149-159, (1998)
[25]  S. Haykin, “Neural networks: A comprehensive foundation,” Prentice Hall PTR, Upper Saddle River, New Jersey, United States, (1998), ISBN: 978-0-13-273350-2
[26]  K. Gurney, “An introduction to neural networks,” CRC Press, 234 pp., (1997)
[27]  T. Chai and R. Draxler, “Root mean square error (RMSE) or mean absolute error (MAE)?,” Geoscientific Model Development Discussions, vol.7, no.1, (2014), DOI: 10.5194/gmdd-7-1525-2014
[28]  M. Dorigo and L. Gambardella, “Ant colony system: A cooperative learning approach to the traveling salesman problem,” IEEE Transactions on Evolutionary Computation, vol.1, no.1, pp.53-66, (1997), DOI: 10.1109/4235.585892
[29]  G. Di Caro and M. Dorigo, “AntNet: A mobile agents approach to adaptive routing,” Technical report IRIDIA/97-12, IRIDIA, Université Libre de Bruxelles, Brussels, (1997)
[30]  O. Martin, S. Otto, and E. Felten, “Large-step Markov chains for the TSP incorporating local search heuristics,” Operations Research Letters, vol.11, pp.219-224, (1992)
[31]  D. Johnson and L. McGeoch, “The travelling salesman problem: A case study in local optimization,” in Local Search in Combinatorial Optimization, New York: Wiley and Sons, (1997)
[32]  M. Fredman, D. Johnson, L. McGeoch, and G. Ostheimer, “Data structures for traveling salesmen,” Journal of Algorithms, vol.18, pp.432-479, (1995)
[33]  R. Vilalta and Y. Drissi, “A perspective view and survey of meta-learning,” Artificial Intelligence Review, vol.18, no.2, pp.77-95, (2002)

CITATION

  • APA:
    Al-akashi, F. (2021). Improving Learning Performance in Neural Networks. International Journal of Hybrid Information Technology, 14(1), 29-44. doi:10.21742/IJHIT.2021.14.1.02
  • Harvard:
    Al-akashi, F. (2021). "Improving Learning Performance in Neural Networks". International Journal of Hybrid Information Technology, 14(1), pp.29-44. doi:10.21742/IJHIT.2021.14.1.02
  • IEEE:
    [1] F. Al-akashi, "Improving Learning Performance in Neural Networks". International Journal of Hybrid Information Technology, vol.14, no.1, pp.29-44, Mar. 2021
  • MLA:
    Al-akashi, Falah. "Improving Learning Performance in Neural Networks". International Journal of Hybrid Information Technology, vol.14, no.1, Mar. 2021, pp.29-44, doi:10.21742/IJHIT.2021.14.1.02
  • © 2021 Falah Al-akashi. Published by Global Vision Press - This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

ISSUE INFO

  • Volume 14, No. 1, 2021
  • ISSN(p):1738-9968
  • ISSN(e):2652-2233
  • Published:Mar. 2021
